The Future of Measuring Advertising ROI is "MPE": Models Plus Experiments

Marketing ROI measurement is going through a generational transformation right now. Following MMM (marketing/media mix modeling) in the ‘90s and MTA (multi-touch attribution) in the 2000s, a new best practice is now emerging, one I call “MPE”: Models Plus Experiments.

MPE refers to a process of continuously improving existing marketing ROI models, particularly MMM, by fine-tuning model assumptions and coefficients against the results of regular experiments that measure incremental ROI.

The Gold Standard Is Not a Silver Bullet

Scientists consider the type of experiment known as a “randomized controlled trial” (RCT) to be the “gold standard” for measuring cause and effect, but the approach is not a silver bullet for advertisers.

The RCT is the equivalent of the “clinical trial” used to prove efficacy in medicine: the outcomes of a test group are compared against those of a control group, where, critically, the two groups are assigned by a random process before the intervention of the experiment. Among advertisers, however, RCTs have a reputation for being difficult to implement.

That concern is overstated, however. Running good media experiments is a lot easier than most advertising practitioners think. Industry measurement experts have made a lot of progress in the past decade, including the revolutionary ad-experimentation technique known as “ghost ads,” which most of the biggest digital media companies now deploy, generally for free. Central Control has also recently introduced a new RCT media experiment design, Rolling Thunder, which is simple to implement and can be used for many types of media by many types of advertisers.

Running a good experiment is certainly far easier than building a complex statistical model to try to explain what is driving the best ROI in the mix, an approach on which many advertisers spend a great deal of time and money.
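To underline how simple the core analysis of an RCT is, here is a minimal Python sketch of measuring incremental lift. Everything in it is synthetic and invented for illustration; the one essential ingredient, per the definition above, is that users are randomly assigned to test or control before the campaign runs.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical example: 20,000 users randomly assigned to test (ads shown)
# or control (ads withheld) BEFORE the campaign runs. That random
# pre-assignment is what makes this an RCT.
n = 20_000
assignment = rng.integers(0, 2, size=n)  # 1 = test, 0 = control

# Simulated outcomes: a 2.0% baseline conversion rate plus a 0.5-point
# incremental effect from the ads (both numbers are made up).
base_rate, true_effect = 0.02, 0.005
converted = rng.random(n) < (base_rate + true_effect * assignment)

test_rate = converted[assignment == 1].mean()
control_rate = converted[assignment == 0].mean()

# Because assignment was random, the difference between the groups is a
# causal estimate of incremental conversions, not a mere correlation.
abs_lift = test_rate - control_rate
print(f"test: {test_rate:.4f}  control: {control_rate:.4f}")
print(f"absolute lift: {abs_lift:.4f}  relative lift: {abs_lift / control_rate:.1%}")
```

The hard part of a real experiment is operational, withholding ads cleanly and logging exposure, which is exactly the problem that designs like ghost ads are meant to solve.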

Another legitimate criticism of experiments is that it is hard to generalize what works in the mix from a single test. True enough. Experiments offer a snapshot of the effect of one campaign, at a point in time, in select media, with a given creative, a particular product offer, specific campaign targeting, and so forth. But which of those factors mattered most in driving that lift?

In general, to assess that, you need to run more experiments.

According to the “hierarchy of evidence,” the ranking of methods for measuring causal effects (and the source of the idea that the RCT is the “gold standard”), the only practice that regularly outranks a single RCT is a meta-analysis of the results from many RCT studies.

Think of a large set of benchmarks of advertising experiments, scored by the various factors within advertisers’ control, such as ad format, publisher partner, media channel, and so on. Such a system of analysis is easily within reach of any large advertiser (or publisher or agency) that routinely runs lots of experiments. For marketers working in a Bayesian framework, the lifts measured by many similar experiments become ideal “priors” for estimating the effect of future campaigns.
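For concreteness, here is a hedged sketch of how such a benchmark could feed that Bayesian workflow: a standard fixed-effect, inverse-variance meta-analysis pools lift estimates from past experiments, and the pooled result then serves as a normal prior that one new experiment updates. All lifts and standard errors below are invented for illustration.

```python
import numpy as np

# Hypothetical benchmark: relative lifts and standard errors from five
# past RCTs on similar campaigns (all numbers invented for illustration).
lifts = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
ses = np.array([0.04, 0.03, 0.06, 0.02, 0.05])

# Fixed-effect meta-analysis: inverse-variance weights, so more precise
# experiments count for more in the pooled estimate.
w = 1.0 / ses**2
prior_mean = np.sum(w * lifts) / np.sum(w)
prior_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled prior: {prior_mean:.3f} +/- {prior_se:.3f}")

# Conjugate normal update: combine the prior with one new experiment's
# measured lift to get a posterior estimate for the new campaign.
new_lift, new_se = 0.20, 0.07
post_prec = 1.0 / prior_se**2 + 1.0 / new_se**2
post_mean = (prior_mean / prior_se**2 + new_lift / new_se**2) / post_prec
print(f"posterior: {post_mean:.3f} +/- {np.sqrt(1.0 / post_prec):.3f}")
```

A random-effects model would be the more cautious choice when campaigns differ substantially, but the principle is the same: past experiments discipline the estimate for the next one.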

The shrewdest advertisers are increasingly adopting this practice, dubbed “always-on experiments.”

The Best Models Are Wrong, But Useful: Experiments Make Them Better

But even such an RCT benchmark doesn’t take the place of a good model. As they say, all models are wrong, but some are useful. Models are good for the big picture, zooming in and out to different degrees of granularity in how the mix is understood to work. They provide forecasting and scenario-planning capabilities, cost/benefit trade-off analysis, simple summaries for strategic planning and other merits that won’t be supplanted by the practice of regular ROI experiments.

Regular experiments are, however, a critical missing ingredient in the ROI analysis of too many advertisers. Experiments enable analysts to make their models better by recalibrating model assumptions against stronger evidence.

That is what I mean by “Models Plus Experiments”: honing coefficients in MMM and MTA models through the practice of always-on experimentation. (And, when I say “experiments,” I specifically mean RCTs.)
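As a closing illustration of that honing step, the sketch below fits a deliberately toy MMM (ordinary least squares on synthetic spend data, with none of the adstock, saturation or seasonality terms a real MMM needs) and then pulls one channel’s coefficient toward an experimentally measured incremental effect. The experiment’s effect size and the blending weight are assumptions chosen for the example.

```python
import numpy as np

# Toy MMM: two years of weekly sales regressed on spend in two channels.
rng = np.random.default_rng(7)
weeks = 104
search = rng.uniform(50, 150, weeks)   # weekly search spend ($000s)
social = rng.uniform(20, 80, weeks)    # weekly social spend ($000s)
sales = 1000 + 3.0 * search + 1.0 * social + rng.normal(0, 50, weeks)

# Fit sales = b0 + b1*search + b2*social by ordinary least squares.
X = np.column_stack([np.ones(weeks), search, social])
beta = np.linalg.lstsq(X, sales, rcond=None)[0]
print(f"modeled social effect: {beta[2]:.2f} sales per unit of spend")

# Suppose an always-on RCT measured social's true incremental effect at
# 0.6 (a made-up number). Blend the model's coefficient toward the
# experimental evidence -- the "Plus Experiments" step of MPE. The 0.8
# weight encodes how much we trust the experiment over the model.
experiment_effect, trust = 0.6, 0.8
beta[2] = trust * experiment_effect + (1 - trust) * beta[2]
print(f"calibrated social effect: {beta[2]:.2f}")
```

In production MMMs the same idea typically appears as experiment-informed priors or calibration constraints inside the model fit, rather than as a post-hoc adjustment, but the effect is the same: the experiments keep the model honest.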