Incrementality (also called “lift”, “causal impact”, or simply “actual impact”) means going beyond the limitations of modeling methods like attribution and using scientific experimentation to measure the true impact of marketing investments as accurately as possible.
The term comes from the idea of measuring only the ‘incremental’ impact driven by each marketing channel or campaign: the outcomes (purchases/revenue, leads, site visits, etc.) that would not have happened without those investments. This has become necessary because attribution modeling has grown increasingly inaccurate, too often over-crediting individual channels and campaigns (and in other cases severely under-crediting them) in a way that misrepresents their true business impact.
Running an incrementality experiment means creating treatment and control groups, usually through randomization, much like the randomized controlled trials (RCTs) used to establish the safety and efficacy of new drugs and treatments in the medical field.
In these experiments only the treatment group receives the ‘intervention’ (either a new channel/campaign/investment or the removal of an existing one), and the difference in overall, unattributed downstream outcomes (purchases/revenue, leads, site visits, etc.) between the two groups reveals the true, incremental impact of the intervention.
By using randomization to ensure the two groups are statistically equivalent, and by comparing total, unattributed conversions in the treatment group against the same in the control group, these experiments provide the most accurate way to determine how much impact was truly caused by the campaigns or investment (the experiment’s ‘intervention’). This is what makes incrementality testing the ‘gold standard’ for accurately measuring the impact and ROI of marketing investments.
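The core arithmetic of such an experiment is simple. The sketch below (all numbers are illustrative, not real data, and the function name is our own) computes the incremental conversion rate and the relative lift from the treatment/control comparison described above:

```python
# Hypothetical sketch: computing incremental lift from a randomized test.
# Inputs are total users and unattributed conversions per group.

def incremental_lift(treated_users, treated_conversions,
                     control_users, control_conversions):
    """Return (incremental conversion rate, estimated incremental
    conversions in the treatment group, relative lift vs control)."""
    rate_t = treated_conversions / treated_users
    rate_c = control_conversions / control_users
    incremental_rate = rate_t - rate_c
    # Conversions in the treatment group that would not have
    # happened without the intervention:
    incremental_conversions = incremental_rate * treated_users
    relative_lift = incremental_rate / rate_c
    return incremental_rate, incremental_conversions, relative_lift

# Example: 100k users per arm, 2.4% vs 2.0% conversion rate
# -> roughly 400 incremental conversions, a ~20% relative lift.
rate, conversions, lift = incremental_lift(100_000, 2_400, 100_000, 2_000)
```

In practice you would also attach a confidence interval (e.g. a two-proportion test) before acting on the estimate; this sketch shows only the point estimate.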
However, running such experiments can require significant time and effort and can impose additional costs on the business, and their results are high-level: they represent the overall impact of one investment or set of campaigns over a specific period of time. That’s why a combination of multiple measurement methods (at minimum, incrementality and attribution) is crucial to effective performance marketing.
User-based experiments: In some cases these experiments can be run with individual users as the ‘experimental units’: each user is randomly placed into either the treatment or the control group, and the control group is suppressed from receiving the campaigns’ ads/messages.
This most closely parallels the RCTs used in healthcare and pharmaceutical testing, and is often the ideal approach when it is possible. But the same degradation of user-level tracking that has impacted audience targeting capabilities has also made user-based experimentation much more challenging, so in many cases it simply isn’t an option.
Geo-based experiments: In cases where splitting and tracking users isn’t possible, which are quite common these days, especially for paid media and ad campaigns, an alternative approach is geo-based experimentation. This involves using ‘geos’, or geographic regions, as the ‘experimental units’ in place of individual people/users.
In the United States the best geos tend to be Nielsen’s Designated Market Areas (DMAs), which have been optimized specifically for such forms of measurement. In these experiments, entire DMAs are either included in or excluded from the campaigns being tested, based on whether each DMA was placed into the treatment or the control group. Linear regression models then compare what actually happened in the treatment group to what would have happened in those same DMAs (the ‘counterfactual’) had the intervention not taken place, i.e. had the campaigns not been run.
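A minimal sketch of that regression step, using entirely synthetic data (this is not any vendor’s implementation, and the planted effect size is arbitrary): fit a linear model on the pre-test period so the control DMAs’ conversions predict the treatment DMAs’ conversions, then use that model as the test-period counterfactual.

```python
# Geo-experiment counterfactual via pre-period linear regression.
# All data below is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
pre_days, test_days = 60, 30

# Synthetic daily conversion totals for the two groups of DMAs.
control_pre = 1_000 + rng.normal(0, 25, pre_days)
treat_pre = 1.2 * control_pre + 50 + rng.normal(0, 25, pre_days)

# Fit treatment ~ a * control + b on the pre-test period only.
X = np.column_stack([control_pre, np.ones(pre_days)])
(a, b), *_ = np.linalg.lstsq(X, treat_pre, rcond=None)

# During the test, only treatment DMAs receive the campaigns;
# we plant a true effect of 80 incremental conversions/day.
control_test = 1_000 + rng.normal(0, 25, test_days)
treat_test = (1.2 * control_test + 50 + 80
              + rng.normal(0, 25, test_days))

counterfactual = a * control_test + b  # what 'would have happened'
incremental = treat_test.sum() - counterfactual.sum()
print(f"Estimated incremental conversions: {incremental:.0f}")
```

With the planted effect of 80/day over 30 days, the estimate should land near 2,400, give or take the noise; real analyses would add significance testing on top of this point estimate.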
Causal impact analyses: In some cases true randomized controlled experiments may not be possible, or at least may not be warranted, and instead incrementality-focused (or ‘causal impact’) analyses may suffice.
Similar to geo-based experiments, this approach uses linear regression on time-series data (‘time-based regression’) to compare actual, unattributed performance on the primary KPI during a test period to a counterfactual informed by a control group. Here, however, the control group is a set of other time series that can be shown to accurately predict the primary KPI during the pre-test period (via the regression model) but are not themselves impacted by the ‘intervention’ (the change to the marketing campaigns being measured during the test period).
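The same mechanics can be sketched with covariate time series standing in for the control group. Everything below is synthetic and illustrative; the two predictor series are simply assumed to be unaffected by the campaign change:

```python
# Causal-impact-style analysis: predict the KPI from covariate
# series fit on the pre-test period, then compare actuals to the
# counterfactual in the test window. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
pre, test = 90, 28

# Covariate series assumed NOT to be impacted by the intervention.
s1 = 500 + rng.normal(0, 10, pre + test)
s2 = 200 + rng.normal(0, 5, pre + test)

# The KPI tracks the covariates; we plant a lift of 60/day
# during the test window to recover.
kpi = 2.0 * s1 + 1.5 * s2 + rng.normal(0, 15, pre + test)
kpi[pre:] += 60

# Fit KPI ~ covariates on the pre-test period only.
X = np.column_stack([s1, s2, np.ones(pre + test)])
beta, *_ = np.linalg.lstsq(X[:pre], kpi[:pre], rcond=None)

counterfactual = X[pre:] @ beta
incremental = kpi[pre:].sum() - counterfactual.sum()
print(f"Estimated incremental KPI over the test window: {incremental:.0f}")
```

The estimate should land near the planted 60/day x 28 days = 1,680, within noise. The key assumption, as in the text, is that the predictor series themselves are untouched by the intervention; if they are not, the counterfactual is biased.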
Because of the major limitations put on user-level tracking and third-party advertising cookies over recent years, and the impact that has had on attribution modeling, incrementality testing is essential to maximizing ROI on marketing investments.
For any company spending a significant amount on digital marketing, the value of calibrating attribution models through periodic incrementality experiments far outweighs the costs and effort involved, especially for the largest channels and campaigns. But there is, unfortunately, significant effort involved.
Marketers must thoughtfully plan every aspect of each test, including the primary and secondary KPIs to measure, the testing methodology to use, the scope of campaigns to include, the timeframe during which to test, and how they will respond to different possible results. Then they must prioritize among all the tests they would ideally like to run and build long-term testing calendars to maximize the learnings and impact from testing.
Incrementality testing, and using the results of tests to calibrate attribution models, is a new practice for many organizations and can be challenging to adopt and incorporate initially. But with good advice and a logical, data-driven approach an incrementality-calibrated measurement strategy has the potential to significantly increase ROI on marketing investments.