How dangerous is your risk aversion?
Most big advertisers’ idea of best practice for the development of ad campaigns includes a fairly rigorous quantitative test before they put an ad on the air. Often the test results involve prescriptive changes and retests until a ‘pass’ is declared.
But the effectiveness and usefulness of quantitative research doesn’t rely on a moderator the way qualitative testing does; it relies more on the specific methodology of the research company – and this is where the danger lies.
The differences between methodologies are vast: some are built on a home-grown set of benchmarks for what makes an effective ad, while others are based on how human beings actually react to stimuli, and on whether the ad makes them buy or do the things marketers want them to do.
One would think that the self-referential methodology would quickly become obsolete, but not so. And, somewhat perversely, this stands to reason:
If a process of selection manages to weed out both the ineffective and the very effective, approving only the moderately effective ads, then our expectation of ‘good’ becomes moderate, too. Unless ads that fail are developed and aired anyway, we have no alternative history to show whether the methodology was effective at avoiding risk, or whether it was in fact costly by diminishing potential ROI. And few advertisers want to take that gamble.
But, funnily enough, some have – and they’ve won.
Take the now legendary Cadbury Gorilla, for example; the ad that was the inspiration for the title of my book. It broke a whole raft of ‘rules’ about what a TV commercial should look like. No demo sequence. No product shot. No pouring of the milk. And the brand was not introduced at the highest point of drama. If Phil Rumbol, the marketing director for Cadbury at the time, had given up on the ad based on its disastrous quantitative test results, the gorilla would not be the icon for Cadbury that it still is today, ten years on. And this isn’t an isolated case either. Phil had previously gambled on the Jean de Florette pastiche which launched Stella Artois’s “Reassuringly expensive” campaign – another resounding success, and another quant-test disaster.
By way of a parallel, medicine has terms for when screening goes wrong: false positives and false negatives. A false positive tells you that you have a disease when you don’t; a false negative tells you that you don’t when you do. But doctors employ strategies to catch these wrong diagnoses – a positive result for a serious disease is immediately tested again. Even if a screening test for a disease is 99% accurate, of 100,000 people tested, around 1,000 would be misdiagnosed. (Alarmingly, 10 people would still think they didn’t have long to live after the second test.)
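The arithmetic behind those numbers can be sketched in a few lines. This is a deliberately simple back-of-envelope model, assuming the 1% error rate applies independently to each test and treating the 1,000 wrong first-pass results as the pool that gets retested:

```python
# Back-of-envelope screening arithmetic: a 99%-accurate test
# applied to 100,000 people, with wrong results retested once.
# Assumption: the 1% error rate applies independently per test.

population = 100_000
error_rate = 0.01  # the test is 99% accurate

# First pass: roughly 1% of all results are wrong.
misdiagnosed_first_pass = int(population * error_rate)  # 1,000 people

# Retest those 1,000: about 1% of the retests are wrong again.
still_wrong_after_retest = int(misdiagnosed_first_pass * error_rate)  # 10 people

print(misdiagnosed_first_pass)   # 1000
print(still_wrong_after_retest)  # 10
```

The key point carries over to ads: even a highly ‘accurate’ screen applied at scale produces a meaningful number of wrong verdicts, which is why medicine retests rather than acting on a single result.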
OK, so maybe that is the cost of the greater good of the 99% – and, as I said, patients are retested anyway. Ads, unlike people, are usually either abandoned or significantly modified after a failed test. So we never know for certain whether they would have been good, or even great, ads that could have been very profitable to the advertiser.
But we have to ask: how many other gorillas have been killed, albeit with good intention?
The challenge for the advertiser is compounded by the fact that many of the ads that fail quant tests but prove successful nonetheless are pretty unconventional. Much like the Cadbury Gorilla, they don’t follow this well-worn format:
Establishing shots – 3 seconds
Story of problem the brand will solve – 7 seconds
Introduction of the brand and problem resolution – 5 seconds
Demo sequence to explain how the product works – 5 seconds
Dramatization of the benefit in the resolved problem – 5 seconds
Pack shot/offer/call to action – 5 seconds
It’s a case of doing only what we know works according to the methodology we use – the cart before the horse. The problem is that little originality can be derived from formula, so if you want your advertising to cut through, it’s a little like swimming in welly boots. The greater downside of such rigid formulae is that agencies will write ads designed to ‘pass’ rather than ads that might better engage and persuade the consumer. It becomes self-fulfilling.
One disruptor in this marketplace is BrainJuicer (to be rebranded System1 from April 2017). The System1 methodology looks for three attributes in ads: Fame, Feeling and Fluency – where Fame is how readily the brand comes to mind, Feeling is the positive emotion the ad evokes, and Fluency is the ease of brand recognition. Having assessed a broad set of advertising, the System1 team claim they can accurately predict whether a brand will grow from the advertising it runs, scoring ads on a five-point scale, with growth coming from those scoring more than three. (Of course, they need to know your share of voice, too.) Interestingly, they’ve tested some acclaimed ads that proved to be howlers, and they’ve predicted some unlikely winners – they even called Trump’s win. System1 hasn’t just assessed ads and presidential candidates after the fact; it has accurately predicted the results.
There may be others out there doing this, and I’m not necessarily advocating one particular research company here. But my point is this: if your competitors are knocking chunks out of your market share with better advertising than yours, your processes for avoiding risk may really be working against you. Maybe you need to change your methodology to fit the advertising you need, rather than change the advertising to fit your methodology?
Author and Founder – How to Buy a Gorilla.
Pre-order here: https://goo.gl/okw9Ch
The company: www.htbag.co.uk
The book: www.howtobuyagorilla.com