This page is a curated library of essays, research papers, case studies, and industry analysis on advertising experiments and measurement. It highlights work from academics, practitioners, and platforms that advances evidence-based marketing and rigorous incrementality testing.
UCSD: “Advertising Measurement”
Professor Kenneth Wilbur’s 90+ page slide presentation for his graduate marketing course should be required reading for all marketers and marketing analysts.
Thomas Leonard: “Incrementality testing is no longer a niche concept”
Consultant Thomas Leonard counsels skepticism about letting Google and Meta measure incrementality for you with their on-platform tools in this thoughtful LinkedIn post. Link
GreyMatter Unloaded: “The Evolution of Experiments”
Marc Ryan, former product and research leader at Nielsen, InsightExpress, Kantar and YouGov, breaks down how randomized versus quasi‑experimental designs impact advertising incrementality measurement in his Substack GreyMatter Unloaded. Link
Mi3 Market Voice: “Why failing media experiments can be the best thing you do this year”
Tom Sheppard, an executive at Australian agency Atomic 212°, explains in this essay why experiments that fail to find lift may offer the most valuable learning of all. Link
The Drum: “Digital ads are absolutely f***ing awful, and what to do about it”
Tom Goodwin argues that digital advertising has become dysfunctional—low-quality, intrusive, overly automated, and creatively hollow—despite offering the best canvas the industry has ever had. His call to “Make Advertising Great Again” urges a return to long-term brand building, better craft, sound judgment, and a rejection of short-term metrics that have steered the industry off course. Link
SSRN: “Weaponized Opacity: Self-Preferencing in Digital Audience Measurement”
A new paper by German lawyers Thomas Hoppner and Philipp Westerhoff argues that independent audience measurement is essential for healthy media markets, yet major platforms like Google and Meta continue to resist it. The platforms' reliance on opaque, self-referential measurement tools leads to misattributed performance, overspending, and distorted competition across the advertising ecosystem. Link
LinkedIn: “On the persistent mischaracterization of Google and Facebook A/B tests”
A new study questions the reliability of Google and Meta’s internal A/B and “conversion lift” tests, though critics point out the authors conflate biased click-based A/B tests with the more rigorous ghost-ads method. Even so, the ghost-ads method now suffers from lower match rates under Apple’s App Tracking Transparency (ATT), making small lifts harder to detect and reinforcing why major advertisers should keep their measurement independent of the platforms selling the media. Link
Northwestern Kellogg: “Predictive Incrementality by Experimentation (PIE) for Ad Measurement”
A new paper by researchers Brett R. Gordon, Robert Moakler, and Florian Zettelmeyer shows how even a small number of advertising experiments gives a better basis for projecting lift and ROI than models alone. Link
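The general idea can be illustrated with a toy calibration (a sketch of the intuition only, not the PIE estimator itself; every campaign number below is hypothetical): measure true lift experimentally for a few campaigns, learn how model-based predictions map onto those measurements, and use that mapping to project lift for campaigns that were never tested.

```python
import numpy as np

# Hypothetical campaigns for which we have both a model-based lift prediction
# (e.g., from attribution or an MMM) and an experimentally measured lift from
# a randomized holdout test. All numbers are illustrative.
model_lift = np.array([0.18, 0.25, 0.40, 0.10, 0.32])
experiment_lift = np.array([0.05, 0.09, 0.16, 0.02, 0.12])

# Learn a simple linear correction from model predictions to experimental truth.
slope, intercept = np.polyfit(model_lift, experiment_lift, deg=1)

# Project lift for campaigns that were never experimented on by applying the
# correction to their raw model predictions.
untested_model_lift = np.array([0.22, 0.35])
projected_lift = slope * untested_model_lift + intercept
print(projected_lift.round(3))
```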
Wikipedia: “Uplift Modeling”
Uplift modeling is arguably the most important targeting strategy in advertising, yet most advertisers under-employ it. It posits that campaigns should exclude three of four user segments: Sure Things (brand loyalists), Lost Causes (loyalists of competitors or non-category shoppers), and Do-Not-Disturbs (who react only negatively to your ads), and focus instead on The Persuadables, the users whose purchase decisions the advertising can actually sway. Link
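To make the segmentation concrete, here is a minimal two-model (“T-learner”) uplift sketch in Python on simulated data. The features, lift size, and targeting threshold are illustrative assumptions rather than recommendations from the article; the point is simply that scoring each user on the difference between treated and untreated response models surfaces the Persuadables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated audience: two features per user, a random treatment flag
# (saw the ad or not), and an observed conversion outcome.
n = 10_000
X = rng.normal(size=(n, 2))
treated = rng.integers(0, 2, size=n)

# Hypothetical response: feature 0 drives baseline purchase intent, feature 1
# drives persuadability; the ad raises conversion probability only for
# persuadable users.
base = 1 / (1 + np.exp(-X[:, 0]))
ad_effect = 0.15 * (X[:, 1] > 0.5)
converted = rng.random(n) < np.clip(base + treated * ad_effect, 0, 1)

# Two-model ("T-learner") uplift estimate: fit separate response models on the
# treated and control groups, then score uplift as the difference in predicted
# conversion probability.
model_t = LogisticRegression().fit(X[treated == 1], converted[treated == 1])
model_c = LogisticRegression().fit(X[treated == 0], converted[treated == 0])
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]

# Target the Persuadables: users with clearly positive estimated uplift. Sure
# Things and Lost Causes score near zero; Do-Not-Disturbs would score negative.
persuadables = uplift > np.quantile(uplift, 0.75)
print(f"targeting {persuadables.sum()} of {n} users")
```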
HBR: “A New Gold Standard for Digital Ad Measurement?”
Researchers Julian Runge, Harpreet Patter, and Igor Skokan explain how Marketing Mix Modeling (MMM) is resurgent in an era of declining ad tracking and why regular experiments are an essential best practice in modern MMM. Link
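One simple way experiments can discipline an MMM, shown here purely as an illustration rather than as the authors’ recommended procedure, is to calibrate a channel’s modeled effect against a lift test, for example by precision-weighting the two estimates. All numbers below are made up.

```python
# Illustrative calibration of an MMM channel effect with a lift test, using
# inverse-variance (precision) weighting. All figures are hypothetical.
mmm_effect, mmm_se = 0.30, 0.10   # channel effect and std. error from the MMM
exp_effect, exp_se = 0.12, 0.04   # incremental effect and std. error from a lift test

w_mmm = 1 / mmm_se**2
w_exp = 1 / exp_se**2
calibrated = (w_mmm * mmm_effect + w_exp * exp_effect) / (w_mmm + w_exp)
print(f"calibrated channel effect: {calibrated:.3f}")  # pulled toward the experiment
```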