How to measure the true impact of in-store media

From proving success to predicting returns, measurement matters. In retail media, just as in any other advertising discipline, gauging a campaign’s effectiveness is a critical capability.

Measurement is about much more than pure performance evaluation, of course. Equipped with the right insights, brands and agencies can also use measurement data to optimise future campaigns, make smarter, better-informed decisions about audiences, and allocate budget more effectively. More than anything, measurement can provide certainty about the commercial returns that a campaign delivered.

One of the many advantages of digital channels is that they typically provide a direct (or at least very clear) link between ad exposure and product purchase. Online media is inherently process-driven, after all; if a customer adds a product to their basket after clicking on a display banner or recommended product, it isn’t hard to join the dots between those two actions.

The store environment, on the other hand, is very different. Here, we don’t have the benefit of clickstream data to tell us what actions someone took after seeing, hearing, or interacting with a piece of media. It may be possible to look at sales during a campaign’s active period and infer that they were influenced by media, but that doesn’t provide the level of confidence that marketing teams usually need today.

Because of that, we need to approach measurement in a slightly different way – and that’s where the concept of “test and control” comes in.

Who did your campaign reach… and what did they do?

Test and control is a measurement methodology designed to help advertisers understand how the presence of store media influenced customer behaviours. It is also one that can be applied to every media product in the Tesco Media & Insight Platform’s Connected Store proposition.

At the heart of this approach is a process of segmentation. When a campaign is being planned, Tesco stores are divided into two groups: the eponymous “test” and “control” stores:

  • Test stores run the selected campaign creative, be that via Point-of-Sale, Connected Display, In-Store Radio, or any other media product. They show us what happens when shoppers are exposed to the ad or campaign in question.
  • Control stores don’t run that creative. While they do still feature other campaigns, they don’t carry the creative that we’re interested in assessing, meaning that we can see how customers behave when they’re not exposed to a campaign.

Naturally, creating an effective test and control system isn’t quite as simple as dividing all stores into one of those two groups and hoping for the best. To ensure that comparisons between the two are accurate, certain parameters also need to be put in place.

The first of these revolves around the nature of the stores themselves, where (relative) uniformity is key. If the test group were filled mainly with Tesco Superstores from the north of England and the control group with London-based Tesco Express stores, for example, that wouldn’t make for an effective comparison. To that end, the stores included in each of the two groups need to be closely matched across a range of factors.
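To make that matching step more concrete, here is a minimal sketch in Python of how stores could be grouped on shared attributes and then split evenly between test and control. The attributes, field names, and splitting rule are illustrative assumptions for this example, not the criteria the platform actually uses.

```python
import random
from collections import defaultdict

# Illustrative store records; the fields and values are assumptions
# made for this sketch, not the attributes actually used for matching.
stores = [
    {"store_id": "S001", "format": "Superstore", "region": "North", "sales_band": "high"},
    {"store_id": "S002", "format": "Superstore", "region": "North", "sales_band": "high"},
    {"store_id": "S003", "format": "Express", "region": "London", "sales_band": "medium"},
    {"store_id": "S004", "format": "Express", "region": "London", "sales_band": "medium"},
]

def split_test_control(stores, seed=42):
    """Group similar stores into strata, then split each stratum evenly
    between test and control so the two groups mirror each other on the
    matching attributes."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for store in stores:
        strata[(store["format"], store["region"], store["sales_band"])].append(store)

    test, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        test.extend(m["store_id"] for m in members[:half])
        control.extend(m["store_id"] for m in members[half:])
    return test, control

test_stores, control_stores = split_test_control(stores)
print("Test:", test_stores)
print("Control:", control_stores)
```

Because each stratum contributes stores to both groups, the resulting test and control sets end up with a comparable mix of formats, regions, and sales volumes.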

The second consideration here is the need to identify which customers visited which type of store and “tag” them appropriately. This is something that we’re able to do by virtue of the Tesco Clubcard, the data from which allows us to see who is shopping where, on what days, and at what times. That then enables us to say for sure which shoppers were exposed to an ad – and which weren’t.
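To illustrate that tagging step, the sketch below labels shoppers as exposed or unexposed based on hypothetical loyalty-card visit records. The store IDs, customer IDs, and campaign dates are invented for the example rather than drawn from real Clubcard data.

```python
from datetime import date

# Hypothetical visit records: (customer_id, store_id, visit_date).
visits = [
    ("C100", "S001", date(2024, 5, 3)),   # test store during the campaign
    ("C100", "S003", date(2024, 5, 10)),  # control store during the campaign
    ("C200", "S004", date(2024, 5, 7)),   # control store only
]

test_stores = {"S001", "S002"}
control_stores = {"S003", "S004"}
campaign_start, campaign_end = date(2024, 5, 1), date(2024, 5, 28)

def tag_customers(visits, test_stores, control_stores, start, end):
    """Tag a customer as 'exposed' if they visited any test store during
    the campaign window, and 'unexposed' if they only visited control
    stores in that window."""
    exposed, unexposed = set(), set()
    for customer_id, store_id, visit_date in visits:
        if not (start <= visit_date <= end):
            continue  # ignore visits outside the campaign period
        if store_id in test_stores:
            exposed.add(customer_id)
        elif store_id in control_stores:
            unexposed.add(customer_id)
    # Anyone who visited both store types counts as exposed.
    return exposed, unexposed - exposed

exposed, unexposed = tag_customers(visits, test_stores, control_stores,
                                   campaign_start, campaign_end)
print("Exposed:", exposed)      # {'C100'}
print("Unexposed:", unexposed)  # {'C200'}
```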

Finally, to understand if – and how – behaviours have changed, you also need to be able to look at how customers were acting before a campaign went live. That means having access to retrospective sales data across both test and control stores alike. This can then be compared against sales made during (and after) the campaign period, providing a truly accurate reflection of store media’s impact.
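The sketch below shows one common way of turning that before-and-after comparison into an uplift figure, using a difference-in-differences style calculation. The numbers are invented, and this is a generic illustration of the logic rather than the exact calculation behind our reports.

```python
# Average weekly unit sales of the promoted product per store group;
# the figures are made up purely for this example.
sales = {
    ("test", "pre"): 100.0,
    ("test", "campaign"): 118.0,
    ("control", "pre"): 101.0,
    ("control", "campaign"): 104.0,
}

def uplift_vs_control(sales):
    """Difference-in-differences style uplift: how much more did test
    stores grow over their own baseline than control stores did?"""
    test_growth = sales[("test", "campaign")] / sales[("test", "pre")]
    control_growth = sales[("control", "campaign")] / sales[("control", "pre")]
    return (test_growth / control_growth - 1) * 100

print(f"Estimated sales uplift: {uplift_vs_control(sales):.1f}%")
# Test stores grew 18% on their baseline, control stores roughly 3%,
# giving an estimated media-driven uplift of about 14.6%.
```

Because the control group captures everything else that was going on – seasonality, price changes, wider promotions – the remaining gap gives a much cleaner read on the store media itself.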

An additional layer of confidence

Reporting against Connected Store campaigns comes in three varieties. Firstly, an End of Campaign report provides an immediate snapshot of performance one week after activity ceases. This is followed by detailed 3- and 12-Week Post Campaign reports that provide a clearer view of the medium-term results. In line with the final point above, these reports also analyse the featured product’s performance before the campaign got underway.

What we typically see in those reports is that, while test and control results are usually indistinguishable during the pre-campaign period, they soon begin to diverge once media has been activated. By tracking the size of that divergence – in unit sales, revenue, customer numbers, and more – we can then begin to determine the true effectiveness of a Connected Store campaign.

That isn’t all, however. As well as a general comparison between our two store groups, we also apply a technique known as significance testing. Here, we look at statistical significance within the results, which allows us to determine how likely it is that performance was driven specifically by the media and not by random chance. Essentially, this gives advertisers an additional level of certainty around the drivers of performance.
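To show what that check can look like in practice, here is a minimal sketch of a two-sample t-test on hypothetical store-level uplift figures, using SciPy. The numbers and the specific test are illustrative of the general principle rather than a description of the methodology behind our reports.

```python
from scipy import stats

# Hypothetical campaign-period sales growth (vs. each store's own
# pre-campaign baseline) for a handful of test and control stores.
test_growth = [0.14, 0.19, 0.11, 0.22, 0.16, 0.18]
control_growth = [0.03, 0.05, 0.01, 0.04, 0.02, 0.06]

# Two-sample t-test: is the gap between the groups larger than we would
# expect from random store-to-store variation alone?
t_stat, p_value = stats.ttest_ind(test_growth, control_growth)

alpha = 0.05  # conventional 95% confidence threshold
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("The uplift is statistically significant at the 95% level.")
else:
    print("The observed uplift could plausibly be down to chance.")
```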

For many advertisers, sales impact is the primary focus when it comes to measurement, with key performance indicators (KPIs) such as sales uplift and incremental return on advertising spend (ROAS) tending to be the most important. That said, our performance reports do provide insight into awareness and conversion-related factors – ensuring that marketers with a focus on different ends of the funnel can get the information they need too.
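For those two KPIs, the underlying arithmetic is straightforward. The sketch below uses commonly accepted definitions of sales uplift and incremental ROAS with invented figures; it is a simplified illustration, not a reproduction of how our reports are built.

```python
def sales_uplift_pct(actual_revenue, expected_revenue):
    """Uplift of actual campaign-period revenue in test stores over the
    revenue expected without the campaign (estimated from control stores)."""
    return (actual_revenue / expected_revenue - 1) * 100

def incremental_roas(actual_revenue, expected_revenue, media_spend):
    """Incremental ROAS: extra revenue generated per £1 of media spend."""
    return (actual_revenue - expected_revenue) / media_spend

# Invented figures purely for illustration.
actual_revenue = 250_000.0    # campaign-period revenue in test stores
expected_revenue = 220_000.0  # control-based estimate without the media
media_spend = 10_000.0

print(f"Sales uplift: {sales_uplift_pct(actual_revenue, expected_revenue):.1f}%")
print(f"Incremental ROAS: £{incremental_roas(actual_revenue, expected_revenue, media_spend):.2f}")
# Roughly a 13.6% uplift and £3.00 of incremental revenue per £1 spent.
```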

Connected Store is an essential aspect of the Tesco Media & Insight Platform, providing brands and agencies with access to more than three-quarters of British households. With this insightful and incisive approach to measurement, Connected Store also allows them to understand exactly how effective their campaigns are – bringing the same level of transparency and certainty that they get from digital media to the physical environment.