Enterprise AI is coming, and it's about to learn all the wrong lessons about marketing effectiveness
🚨 TL;DR: Most advertisers will train AI systems on flawed campaign measurement data, risking a generation of misinformed media planning.
By Rick Bruner, CEO, Central Control
Last week's I-COM Global summit was, once again, a highlight of the year for me. While its setting is always spectacular (Menorca was no exception) and the fine food and wine certainly helped, what truly distinguishes I-COM is the quality of the discussions, thanks to the seniority and expertise of attendees from across the ad ecosystem.
The dominant theme this year: large organizations preparing for enterprise-scale AI. We heard from brands including ASR Nederland, AXA, Bolt, BRF, Coca-Cola, Diageo, Dun & Bradstreet, Haleon, IKEA, Jaguar, Mars, Matalan, Nestlé, Reckitt, Red Bull Racing, Sonae, Unilever, and Volkswagen about using AI to optimize virtually every facet of marketing: user attention, content creation, creative assets, CRM, customer engagement, customer insights, customer journey, data governance, personalization, product catalogs, sales leads, social media optimization, and more.
Two standout keynotes, by Nestlé's Head of Data and Marketing Analytics, Isabelle Lacarce-Paumier, and Mastercard's Fellow of Data & AI, JoAnn Stonier, focused on a critical point: AI’s success hinges on the quality of its training data. Every analyst knows that 90% of insights work is cleaning and preparing the data.
The situation couldn’t be more urgent. My longtime friend Andy Fisher, one of the few industry experts with more experience running randomized controlled experiments than I, pointed out that most companies still don't use high-quality tests to measure advertising ROI. As a result, they’re about to embed flawed campaign conclusions into AI-driven planning tools, creating a knowledge debt that could take decades to rectify.
As I’ve written here recently, most advertisers still rely on quasi-experiments at best, or decade-old attribution models at worst. Even today’s favored quasi-methods — synthetic controls, debiased ML, stratified propensity scores — offer only the illusion of experimental rigor, often delivering systematically biased results.
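To make the bias concrete, here's a toy simulation (all numbers invented): when ad targeting correlates with purchase intent, the naive exposed-versus-unexposed comparison that quasi-methods attempt, and often fail, to fully correct overstates lift severalfold.

```python
# Toy simulation (hypothetical numbers): why non-randomized comparisons
# of exposed vs. unexposed users overstate ad lift when exposure
# correlates with purchase intent.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Latent purchase intent; targeting makes high-intent users more likely
# to see the ad, independent of any causal effect of the ad itself.
intent = rng.uniform(0, 1, n)
exposed = rng.uniform(0, 1, n) < 0.2 + 0.6 * intent

TRUE_LIFT = 0.01  # the ad truly adds 1 point of conversion probability
conv_prob = 0.02 + 0.08 * intent + TRUE_LIFT * exposed
converted = rng.uniform(0, 1, n) < conv_prob

# Naive "quasi" comparison: exposed minus unexposed conversion rate.
naive_lift = converted[exposed].mean() - converted[~exposed].mean()

print(f"true incremental lift: {TRUE_LIFT:.2%}")
print(f"naive observed 'lift': {naive_lift:.2%}")  # roughly 2-3x too high
```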
By contrast, randomized controlled trials, especially large-scale geo tests, remain the most reliable evidence for determining media effectiveness. Yet they’re underused and underappreciated.
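For readers who haven't run one, the core logic of a randomized geo test is surprisingly compact. Here's a minimal sketch with simulated data (all numbers invented); real designs add power analysis, matched-market assignment, and longer pre-periods:

```python
# Minimal sketch of randomized geo-test logic: randomly assign geos to
# treatment/holdout, then estimate lift as a difference in means.
import numpy as np

rng = np.random.default_rng(7)
n_geos = 120

# Hypothetical pre-campaign weekly sales per geo (heavy-tailed, like real markets).
pre = rng.lognormal(mean=10, sigma=0.8, size=n_geos)

# Randomization is the key step: assignment is independent of geo size or trend.
treated = rng.permutation(n_geos) < n_geos // 2

TRUE_LIFT = 0.04  # campaign truly lifts sales by 4%
post = pre * (1 + TRUE_LIFT * treated) * rng.normal(1.0, 0.05, n_geos)

# Comparing post/pre ratios removes most between-geo variance before
# comparing the two arms.
ratio = post / pre
diff = ratio[treated].mean() - ratio[~treated].mean()
se = np.sqrt(ratio[treated].var(ddof=1) / treated.sum()
             + ratio[~treated].var(ddof=1) / (~treated).sum())

print(f"estimated lift: {diff:.1%} +/- {1.96 * se:.1%} (true: {TRUE_LIFT:.0%})")
```

Nothing about the arithmetic is exotic; the value comes entirely from the random assignment, which no amount of post-hoc modeling can substitute for.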
Why are randomized tests so underused? Because quasi-experiments are “customer pleasing,” as an MMM expert recently put it. They skew positive, so much so that, as one official experimentation partner told me the other day, Meta now mandates synthetic control methods (via its GeoLift R package) for those partners, because the results are reliably favorable to Meta's media.
That might be fine if those tests stayed in the drawer as one-off vanity metrics. But with enterprise AI, they won’t. They’ll be unearthed and fed, by the dozens or hundreds, into new automated planning systems, training the next generation of tools on false signals and leading to years of misallocated media spend.
The principle is simple: garbage in, garbage out.
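A toy illustration of the stakes (invented numbers): feed an automated planner a single inflated ROAS estimate, and it confidently shifts the whole budget to the weaker channel.

```python
# Toy illustration (invented numbers): an optimizer trained on inflated
# ROAS estimates allocates budget to the worse channel.
true_roas     = {"channel_a": 2.5, "channel_b": 1.8}
measured_roas = {"channel_a": 2.5, "channel_b": 3.9}  # quasi-experiment inflates B

budget = 1_000_000
pick = max(measured_roas, key=measured_roas.get)  # picks channel_b

print(f"optimizer allocates to {pick}")
print(f"expected true return:  {budget * true_roas[pick]:,.0f}")
print(f"return with honest data: {budget * max(true_roas.values()):,.0f}")
```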
As I said goodbye to the Mediterranean for another year, all the Manchego and saffron bulging in my suitcase couldn't soothe my unease about the future of marketing performance. It’s time to standardize on real evidence. Randomized experiments should be the norm, not the exception, for ROAS testing. The future of AI-assisted media planning depends on it.