Why Ad Testing Matters: The Science Behind Knowing What Works

Marketing
4 min. read

Elliot Britton / January 08, 2018

The advertising landscape has evolved dramatically in response to rapidly changing consumer trends. Most tellingly, the digital revolution shows no sign of slowing.

According to Salesforce, digital advertising is increasing by 20% year on year. Despite this, many brands are still struggling to understand their digital audiences.

This points to a worrying state of affairs: companies are increasing their spend on digital advertising without the insights needed to make it work.

The Need for Smarter Spend

With more investment moving to digital, attention naturally turns to one question: “How do I make it more effective?”

Many brands are taking drastic measures to ensure less investment is lost.

P&G is one such brand, recently cutting its digital ad spend by between $100m and $140m after learning, like many others, that it was still wasting huge amounts of money on ineffective ad campaigns. In a statement, P&G said the move was only temporary, but a logical one based on its findings.

“Clearly we don’t need to be spending money that is seen by a bot and not a person”, said CFO Jon Moeller. “We don’t need to be spending money on ads that are placed in inappropriate places, and that’s why you see a significant reduction.”

Moves like this have sparked much needed debate on the topic of expenditure on digital advertising, and the need for more effective measurement solutions that offer far more visibility into what works, and equally, what doesn’t.

Investing in the Test

Advertisers have long used creative pre-testing to optimize their ads before launching a campaign. This ensures less spend is lost when it comes to activation.

According to an Ipsos study, creative quality determines 75% of impact as measured by brand and ad recall. Smart brands are taking this on board and investing more in creative testing, which essentially tells them what works, before it works – and the proof is in the pudding.

In an interview with AdAge, Keith Weed, Unilever’s Chief Marketing and Communications Officer, states: “I’ve certainly got enough evidence, real hard evidence, showing that ads we’ve pretested perform better in the marketplace than ads we don’t. It’s inarguable proof.” Similarly, Clancy and Dyson reported in Admap in 2014 that creative has over four times more influence than media efficiency on profit impact.

And with more affordable and time-efficient pre-testing solutions now coming to the fore, brands have few reasons left not to pre-test their ideas – a step that can save them huge amounts of spend in the long run.

Knowing What Works

But while the pre-test can offer some key insights into what might work for your audience, the post-test is where the real learnings come into play.

Through in-depth studies that quantify the effectiveness of your campaign, analyzing audience reactions and responses, you can:

  • Evaluate the true impact of your campaign
  • Measure your ROI accurately
  • Identify the metrics that need more attention

Insight of this kind represents the holy grail when it comes to marketing – knowing precisely how every part of a campaign collectively drives sales, and what happens when you adjust them.

GWIQ™ Ad Effectiveness uses a control versus exposed method to quantify the impact of online advertising. Unlike other studies of this kind, which rely solely on passively derived data collected through website analytics, GWIQ takes a different approach.

Leveraging the world’s largest panel of digital consumers, run by GlobalWebIndex, GWIQ measures exposure to your campaigns without the need for recall or recognition questions.

Those who have seen the ad (exposed) and those who have not (control) are both sent an identical, bespoke survey, framed around the campaign objectives and the brand metrics you want to measure. The difference in opinion between these two groups is essentially what quantifies the impact of your campaign.

This process is carefully designed to ensure a representative audience of your campaign is surveyed and all key metrics are considered. This is done by matching your target audience (right down to the minute details) with real panelists, so the tested subjects closely mirror the audience you’re targeting.
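To make the matching idea concrete, here is a minimal sketch of how exposed respondents might be paired with unexposed panelists who share the same audience attributes. This is an illustrative reconstruction, not GWIQ’s actual implementation; the attribute names and panelist records are hypothetical.

```python
# Hypothetical sketch: exact matching of exposed respondents to unexposed
# panelists on the attributes that define the target audience.

MATCH_KEYS = ("age_band", "gender", "country")  # illustrative attributes


def profile(person):
    """The tuple of attributes a matched control panelist must share."""
    return tuple(person[k] for k in MATCH_KEYS)


def match_controls(exposed_group, panelists):
    """For each exposed respondent, pick an unused panelist with the same profile."""
    available = list(panelists)
    matched = []
    for person in exposed_group:
        for candidate in available:
            if profile(candidate) == profile(person):
                matched.append(candidate)
                available.remove(candidate)  # each panelist is matched once
                break
    return matched


exposed = [{"id": "e1", "age_band": "25-34", "gender": "F", "country": "UK"}]
panel = [
    {"id": "p1", "age_band": "45-54", "gender": "M", "country": "US"},
    {"id": "p2", "age_band": "25-34", "gender": "F", "country": "UK"},
]
print([p["id"] for p in match_controls(exposed, panel)])  # ['p2']
```

In practice a panel-based study would match on far more attributes (and may use statistical rather than exact matching), but the principle is the same: the control group is built to look like the exposed group in every respect except ad exposure.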

The GWIQ Process 

    1. Analyze campaign objectives
    2. Add GWIQ tag to campaign to track exposure
    3. Create bespoke survey around objectives and brand metrics
    4. Survey exposed group
    5. Match exposed group to unexposed panelists
    6. Survey control panelists
    7. Compare responses of the two groups
    8. Analyze the difference between responses to quantify the impact
    9. Produce report on effectiveness of advertising
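The comparison in steps 7 and 8 boils down to a lift calculation: the share of positive responses among the exposed group minus the share among the control group, per brand metric. The sketch below is a simplified illustration under that assumption; the survey responses and metric names are invented for the example.

```python
# Hypothetical sketch of steps 7-8: comparing exposed vs. control responses
# to quantify campaign impact as percentage-point lift per brand metric.


def brand_lift(exposed, control):
    """Percentage-point lift: positive-response rate of exposed minus control."""
    exposed_rate = sum(exposed) / len(exposed)
    control_rate = sum(control) / len(control)
    return round((exposed_rate - control_rate) * 100, 1)


# Illustrative survey answers (1 = positive response, 0 = negative)
results = {
    "brand_awareness": brand_lift(
        exposed=[1, 1, 1, 0, 1, 1, 0, 1], control=[1, 0, 1, 0, 0, 1, 0, 0]
    ),
    "purchase_intent": brand_lift(
        exposed=[1, 0, 1, 1, 0, 1, 0, 0], control=[0, 0, 1, 0, 0, 1, 0, 0]
    ),
}

for metric, lift in results.items():
    print(f"{metric}: {lift:+.1f} pts")
# brand_awareness: +37.5 pts
# purchase_intent: +25.0 pts
```

Because both groups answer an identical survey and are matched on audience attributes, any difference in their responses can be attributed to the campaign itself rather than to differences in who was asked.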

Beyond Behavioral Analytics

For today’s brands to stay ahead of the curve and improve the quality of their digital campaigns, there’s an underpinning need to move beyond the use of behavioral analytics and vanity metrics alone towards a more holistic and accurate solution.

This means incorporating trusted survey data given to you directly from the people you’re targeting to understand the motivations behind their actions.

By combining this with analytics, matching perceptions with behaviors, you can fill in the blanks and ensure you don’t miss a beat.

With a panel as wide and far-reaching as GlobalWebIndex’s working in tandem with a solution like GWIQ, there’s simply no longer any reason for questions to be left unanswered.

Questions to Consider

  1. Are my ads reaching the right people?
  2. Are they having the desired effect on my target audience?
  3. Are they shifting perceptions in the right direction?
  4. Are they speaking the right language?
  5. Are they focused on the right objectives?
  6. Are they guiding customers along the path to purchase?
  7. Are they optimized for the right stage of the customer journey?
  8. Which aspects of the ad are proving effective and which are ineffective?