
9 Measure incrementality

All fast-growing online businesses measure incrementality: they learn quickly what works well, and what works better. As mentioned before, many (online) marketing platforms, publishers and tools overstate the role that media played in driving your KPIs, usually because they measure correlation and not causation. Most attribution solutions have no baseline built in (what would happen if I stopped advertising altogether?), nor do they provide an easy framework to test causation.

Keen to know what your marketing investment is actually driving? Really keen? You will need proper test and control groups, and you will need to run tests on a continuous basis. Let me talk you through one easy setup, and one hard one.

Incrementality testing on RLSA and/or Remarketing

Google Analytics randomly assigns each of your users to one of 100 buckets. For any given user, the User Bucket dimension (values 1 to 100) indicates the bucket to which the user has been assigned. By including a User Bucket condition in audience definitions, you can create multiple audiences that are identical in composition except for their User Bucket values. You can then compare the effects of different campaigns on identical audiences.
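To make the mechanics concrete, here is a minimal sketch of a bucket-based test/control split. GA's actual bucket assignment is internal to the product; the seeded random mapping below is only a stand-in for illustration, and the 10% holdout size is an assumption.

```python
import random

def assign_bucket(user_id, seed=42):
    """Stand-in for GA's User Bucket: deterministically map a user
    (here, a cookie ID) to one of 100 buckets."""
    rng = random.Random(f"{seed}:{user_id}")
    return rng.randint(1, 100)

def split_groups(user_ids, control_buckets=range(1, 11)):
    """Buckets 1-10 form a 10% holdout (control) that sees no
    remarketing; the remaining buckets are the test group."""
    control, test = [], []
    for uid in user_ids:
        (control if assign_bucket(uid) in control_buckets else test).append(uid)
    return test, control

users = [f"cookie_{i}" for i in range(10_000)]
test, control = split_groups(users)
print(len(test), len(control))  # roughly a 90/10 split
```

The point of the deterministic mapping is that a returning user always lands in the same bucket, so the two groups stay cleanly separated for the life of the cookie.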

For more on best practices for setting up Google Analytics audiences, please go back to our Google Analytics audience chapter here.

For example, you might create two remarketing audiences that share the same Age, Gender, and City, but are differentiated by User Bucket. You can then run a different version of your remarketing campaign for each audience to see which version is most effective. Or run no remarketing at all against a specific percentage of the User Buckets, to find the baseline of users who would come back and convert even without being remarketed to.
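Reading such a test comes down to comparing conversion rates between the remarketed group and the holdout: the holdout's rate is your baseline, and the difference is the incremental effect. A minimal sketch, using a standard two-proportion z-test and entirely made-up counts:

```python
from math import sqrt
from statistics import NormalDist

def incremental_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Relative lift of the test group over the holdout baseline,
    plus a two-sided p-value from a two-proportion z-test."""
    p_t, p_c = test_conv / test_n, ctrl_conv / ctrl_n
    # Pooled conversion rate under the null (no difference)
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_t - p_c) / p_c, p_value

# Hypothetical numbers: 9,000 remarketed users, 1,000 in the holdout
lift, p = incremental_lift(test_conv=540, test_n=9000, ctrl_conv=50, ctrl_n=1000)
print(f"incremental lift: {lift:.1%}, p-value: {p:.3f}")
```

Note that with a small holdout the confidence interval is wide; even a 20% apparent lift can fail to reach significance, which is exactly why tests need to run continuously rather than as one-off snapshots.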

Just keep in mind that the User Bucket is cookie based, not user based. You might want to combine these random user list techniques with the AdWords remarketing tag. This gives a much cleaner test, with far less overlap between test and control groups.

The famous eBay Paid Search incrementality tests

The most famous examples of incrementality testing are probably the studies conducted by eBay. Two major ones have been publicised and widely discussed. The first, claiming that Brand Search “did not work” and that “generic search value was mostly in new user acquisition”, has been widely critiqued. Here is an interesting quote from that report:

This paper reports the results from a series of controlled experiments conducted at eBay Inc., where large-scale SEM campaigns were randomly executed across the U.S. Our contributions can be summarized by two main findings. First, we argue that conventional methods used to measure the causal (incremental) impact of SEM vastly overstate its effect. Our experiments show that the effectiveness of SEM is small for a well-known company like eBay and that the channel has been ineffective on average. Second, we find a detectable positive impact of SEM on new user acquisition and on influencing purchases by infrequent users. This supports the informative view of advertising and implies that targeting uninformed users is a critical factor for successful advertising.

eBay more recently published a second piece of paid search research. Here is a short recap of that research:

Paid search, also known as Search Engine Marketing (SEM), allows advertisers to target users of a search engine with relevant ads. It is broadly adopted by advertisers due to its superior capability to drive users, traffic, and conversion compared to other marketing channels. It also provides direct consumer behavior metrics such as ad impressions, clicks, website visits, and subsequent conversions for advertisers. However, the true effectiveness of paid search has been hard to measure, as the sales led directly by paid search ads might lack causal effects.

What are the incremental sales or user acquisitions truly driven by paid search campaigns?

The answer typically involves detecting small signals out of large noisy consumer behavioral data in a controlled experiment.

At eBay, we’ve continued to test and learn. In this latest study, we have developed a hybrid Geo+User experiment approach, and conducted the first-ever long-running field test, over one and a half years, to measure the incremental impact of Google paid search campaigns on one of the largest e-commerce platforms: the U.S. eBay marketplace. Among our findings:

  1. Paid search drives statistically significant sales lift to the U.S. eBay marketplace.
  2. Paid search is also an important source for acquiring new users. The user acquisition lift is higher than the immediate sales lift from paid search campaigns.
  3. The long-running test reveals a strong seasonality trend in paid search effectiveness. Paid search campaigns made the biggest difference on sales during holidays.
  4. It appears that ad spends need to reach a certain threshold to enable the overall effectiveness of paid search. However, further increasing spends beyond that threshold increases cannibalization of other marketing channels or organic sources. Advertisers are encouraged to run experiments regularly to guide marketing investments.
  5. Natural search performance experienced nearly double-digit gains when paid search was turned off completely. Nevertheless, natural search cannot fully substitute for paid search traffic. Natural search and paid search shall work in a complementary way to enhance brand presence and promote customer conversion.

Published at the 2016 ACM Conference on Economics and Computation (EC’16), the full paper can be accessed here. It is a fantastic example of how to measure incrementality (if you have the resources that eBay does).
