Resources

The following articles and academic papers are recommended reading for marketers and their colleagues who want to better understand the “incrementality” movement: marketers adopting higher-quality randomized experiments to improve their advertising ROI.


Broadcast & Cable (NextTV): “A Scientific Approach to TV Ad Measurement”

Gaurav Shirole, VP of Ad Measurement at Roku, shares some great advice on how advertisers can be more disciplined in running randomized experiments for TV and digital video, including this: “Start with the scientific method: develop a specific hypothesis to test. Ad measurement works best when marketers are solving a specific business problem.”


ANA Genius Award (Short Video): “Sam’s Club: Excellence in Marketing Analytics”

The Association of National Advertisers (ANA) honors Sam’s Club with a Genius Award, recognizing the excellence of its programs of randomized controlled trials and uplift modeling for maximizing the incremental return on marketing dollars. The three-minute video is a great case study in incrementality measurement best practices. Tony Rogers, Chief Member Officer at Sam’s Club, states clearly in the video the value of those programs: “It’s allowed us to have a lot more certainty and a lot more precision around ROAS (return on ad spend) and renewal rates of our members.”


MediaPost: “Why The Most Important New Industry Acronym May Be RCT”

Joe Mandese, editor in chief, reporting on the announcement of Central Control’s project with the Advertising Research Foundation and other partners, “RCT21,” writes: “The method, known as ‘randomized control testing’ — or what will likely become the acronym du jour on Madison Avenue for the next several years, RCT — goes beyond classic marketing and/or media-mix modeling and so-called attribution systems, to scientifically correlate and measure the effect of actual advertising.”


AdWeek: “The Current State of Digital Measurement by Attribution Is Flawed”

By Nathan Woodman, founder of Proof and formerly Chief Data Officer of Havas, laying out how inherent biases in popular observational analytics methods fail to serve advertisers’ interests.


Think with Google: “Measuring Effectiveness - Three Grand Challenges”

An excellent 49-page business paper; the first of its three “grand challenges” is “Incrementality: Proving cause and effect.”


The Correspondent: “The new dot com bubble is here: it’s called online advertising”

A harsh, widely read article, shining a light on inherent flaws in how most advertisers measure the effect of their digital advertising investments.


Kellogg School of Management, Northwestern University: “Is Your Digital-Advertising Campaign Working? If you are not running a randomized controlled experiment, you probably don’t know”

A great business article summarizing research Kellogg conducted with Facebook (see the academic paper below) demonstrating why “quasi-experiment” methods with “synthetic controls,” along with other sophisticated “observational” statistical-inference techniques, perform far worse than randomized controlled trials (RCTs) at reliably estimating “lift” from advertising campaigns.
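The core arithmetic behind RCT-based lift measurement is simple, which is part of its appeal. The sketch below is a generic illustration with made-up numbers, not figures from the Kellogg/Facebook study: users are randomly split, only one group is served ads, and any difference in conversion rate is attributable to the advertising.

```python
# Generic sketch of lift estimation from a randomized holdout (made-up numbers).
# Because assignment is random, the control group estimates what the exposed
# group would have done without ads, so the difference is the causal effect.

exposed_users, exposed_conversions = 100_000, 2_300
control_users, control_conversions = 100_000, 2_000

rate_exposed = exposed_conversions / exposed_users   # 0.023
rate_control = control_conversions / control_users   # 0.020

# Conversions that would not have happened without the advertising.
incremental_conversions = (rate_exposed - rate_control) * exposed_users

# Relative lift over the baseline (control) conversion rate.
relative_lift = (rate_exposed - rate_control) / rate_control

print(f"incremental conversions: {incremental_conversions:.0f}")  # -> 300
print(f"relative lift: {relative_lift:.1%}")                      # -> 15.0%
```

Observational attribution methods, by contrast, must model this counterfactual baseline from non-randomized data, which is where the biases the article describes creep in.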


IAG Insurance: “AI, Machine Learning and Attribution: Practical Application in Marketing at IAG”

A presentation by Willem Paling, PhD, Director of Customer & Growth Analytics at insurance giant IAG, on the benefits of uplift modeling and experiments compared with the more popular but flawed practices of propensity modeling and observational data techniques (AI, ML, attribution).


Wikipedia: “Uplift Modeling”

Uplift modeling is arguably the most important targeting strategy in advertising, yet, as the IAG presentation above notes, it is under-employed by most advertisers. It posits that campaigns should strive to exclude three of the four segments of users -- Sure Things (brand loyalists), Lost Causes (loyalists of competitors or non-category shoppers), and Do-Not-Disturbs (those who react only negatively to your ads) -- and focus instead on the Persuadables (those most likely to buy because of the advertising).
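The four segments fall out of comparing each user's response probability with and without the ad. The sketch below is a minimal, hypothetical illustration of that quadrant logic; the function name, threshold, and example probabilities are assumptions for demonstration, not taken from the Wikipedia article:

```python
# Minimal sketch of the uplift quadrants: compare purchase probability when
# shown an ad (treated) vs. withheld (control). All numbers are illustrative.

def classify(p_control, p_treated, threshold=0.5):
    """Map (response without ad, response with ad) to an uplift quadrant."""
    uplift = p_treated - p_control
    if uplift > 0:
        return "Persuadable"        # buys only because of the ad
    if uplift < 0:
        return "Do-Not-Disturb"     # the ad backfires
    # No uplift: outcome is the same either way.
    return "Sure Thing" if p_control >= threshold else "Lost Cause"

# Illustrative segments: (P(buy | no ad), P(buy | ad))
segments = {
    "brand loyalist":       (0.90, 0.90),  # Sure Thing: buys either way
    "competitor loyalist":  (0.02, 0.02),  # Lost Cause: never buys
    "on-the-fence shopper": (0.10, 0.35),  # Persuadable: ad moves the needle
    "ad-fatigued customer": (0.40, 0.25),  # Do-Not-Disturb: ad hurts
}

for name, (pc, pt) in segments.items():
    print(f"{name:22s} -> {classify(pc, pt)}")
```

Only the Persuadables yield a positive return on ad spend in this framing, which is why uplift-targeted campaigns exclude the other three segments.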


Academic paper: “Ghost Ads: Improving the Economics of Measuring Online Ad Effectiveness”

By Garrett A. Johnson, Randall A. Lewis & Elmar I. Nubbemeyer (2017), this is the original “ghost ads” paper, which has since influenced many digital media companies to adopt similar techniques for running randomized advertising experiments.


Academic paper, Kellogg School of Management, Northwestern University: “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook” (2019)