Numbers don't lie – or do they?
Sometimes they can be misleading little rascals.
There are no shades of meaning with numbers. A 2 means 2, and that's it. Clear. Concrete. Not open to discussion or interpretation like "That depends on what the definition of 2 is." Two is this many: one, two.
So why can numbers be so misleading?
Sometimes, the numbers are irrelevant:
For decades, companies measured every aspect of customer satisfaction. Image for advanced technology? Check. Friendly salespeople? Check. Like the color? Check. The only problem: these numbers don't correlate with business success. One number – and apparently only one – does: the net promoter score.
A net promoter score is simply the percentage of people rating a company 0 through 6 subtracted from the percentage rating it 9 or 10 in response to the question: "On a scale of 0 to 10, how likely are you to recommend this company to a friend or associate?" The only other question that matters is the follow-up: "Why?"
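To make the arithmetic concrete, here's a minimal sketch in Python (the function name and the sample ratings are invented for illustration):

```python
def net_promoter_score(ratings):
    """Compute a net promoter score from 0-10 'likely to recommend' ratings.

    Promoters rate 9 or 10; detractors rate 0 through 6; passives (7-8)
    count toward the total but toward neither group.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Invented sample: 50 promoters, 30 passives, 20 detractors out of 100.
sample = [10] * 50 + [8] * 30 + [5] * 20
print(net_promoter_score(sample))  # 30.0
```

Note that the passives still appear in the denominator, which is why a pile of lukewarm 7s and 8s quietly drags the score toward zero.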
Some creative research measures the likeability of ads and commercials, even though there seems to be no correlation between how much a commercial is liked and how effective it is at generating the intended behavior in the target audience. So why measure likeability? More to the point: why make decisions based on a factor that doesn't affect results?
A number used to measure one factor may be inappropriately applied to another:
Customer satisfaction scores – whether net promoter or any other type – measure customer satisfaction. Period. They do not measure what motivated customers to pick the company in the first place. Ample research has shown that the factors that determine selection are often very different from those that determine satisfaction.
Marketers see customers only during the transaction, so they're inclined to overemphasize the importance of satisfaction drivers. They can't observe prospects while those prospects are making their purchase decisions, so they're prone to confusing the factors that generate satisfaction with the ones that generate sales. When they advertise satisfaction drivers rather than sales drivers, the results are often disappointing.
Another problem with focusing on satisfaction drivers is that businesses only see the people who decided to buy from them, not all the ones who went elsewhere. So large areas of opportunity may go unrecognized.
There's occasionally confusion about what the numbers mean:
A recent study showed that only 2% of cars are sold online. Quite true. But more than ten times that many are sold in dealerships when a customer walks in with a check in the amount previously negotiated online. And many more sales begin with online research into make, model, price and discounts. So the dealer who naively believes that only 2% of purchases come from the internet is competing at a serious disadvantage against those who know what the numbers really mean.
Often, a crucial number is missing:
A new television campaign has achieved a significant jump in awareness. A test panel found the commercial effective at communicating the intended benefit. (That can be determined in focus groups and one-on-one research.) The media plan delivers strong sustaining-weight reach and frequency. And sales are flat. Are the data flawed? Is the campaign a bust? Or did three new competitors enter the market with launch-weight reach and frequency? If so, the new campaign's share of voice is a small fraction of the company's former portion of media exposure in the category, and just holding sales flat is a big win.
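The missing number here is share of voice: your media spend divided by total category spend. A back-of-the-envelope sketch (all dollar figures invented for illustration) shows how an unchanged budget can quietly shrink from a third of the category's exposure to about a ninth:

```python
# Toy share-of-voice calculation; every spend figure below is hypothetical.
our_spend = 2_000_000

# Before: two incumbent competitors at comparable sustaining weight.
total_before = our_spend + 2 * 2_000_000
sov_before = our_spend / total_before          # ~33%

# After: three new entrants arrive at heavy launch weight.
total_after = total_before + 3 * 4_000_000
sov_after = our_spend / total_after            # ~11%

print(f"share of voice before: {sov_before:.0%}, after: {sov_after:.0%}")
```

Against a backdrop like that, flat sales look very different than they do in isolation.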
On a micro scale, we know an automotive dealership with a third of a competitor's media spending that has been duped into believing the media weights are the same by the competitor's "friendly" data sharing. The dealer has never attempted a competitive media analysis, so he wonders why his sales are a fraction of his "friend's," and switches from agency to agency looking for a magic solution to a simple – but unrecognized – problem.
Numbers may not lie, but people sometimes do:
Ask an average group of people if advertising influences their purchase decisions and the answer is a resounding and unanimous "No!" With that sort of response, you'd expect the entire industry to close up shop immediately. Watch how those people behave, however, and it's a very different story.
The new – and excellent – book What Sticks makes the point that the only accurate way to measure advertising's impact on purchase decisions is by observing the behavior. Decades earlier, Rosser Reeves devised the Usage Pull methodology, which compared the open-ended purchase-intent responses of people who had and had not been exposed to a brand's advertising. Both are valid. But asking "Would that ad convince you to buy?" isn't.
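For illustration, here's a minimal sketch of the exposed-versus-unexposed comparison the Usage Pull idea rests on. The survey counts are invented, and a real study controlled for far more than this; this only shows the core subtraction:

```python
def usage_pull(exposed_buyers, exposed_total,
               unexposed_buyers, unexposed_total):
    """Difference in purchase rate between people who were exposed to the
    advertising and people who weren't, per the article's description of
    Reeves' approach (sampling and weighting details omitted)."""
    return exposed_buyers / exposed_total - unexposed_buyers / unexposed_total

# Invented counts: 28% of the exposed group bought vs. 21% of the
# unexposed group, a 7-point pull roughly attributable to the advertising.
print(f"{usage_pull(140, 500, 105, 500):.1%}")  # 7.0%
```

The point is that the pull is derived from what people did, or at least from what they volunteered unprompted, never from asking them to judge the ad's persuasiveness directly.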
And, of course, there's always misdirection:
Magicians use flamboyant gestures with one hand to divert attention from what the other hand is doing. Matadors dupe bulls into charging a cape rather than the person wielding it. It's called misdirection, and it's all too common in marketing communications.
Media salespeople, agencies and even internal teams sometimes say, in effect, "Look over here!" to direct attention away from what really matters. Like the media rep who proves conclusively that his or her station is #1 in the market while conveniently omitting the fact that it doesn't reach your company's target demographic. Or the agency that trumpets a commercial's Advertising Age "most liked" ratings while ignoring dismal awareness and preference numbers. Or the sales manager whose PowerPoint focuses on increased sales while sidestepping the fact that all of those sales were made at such deep discounts that the company lost money on every one.
We love numbers:
Numbers are the heart and soul of marketing. At BrainPosse we love the little rascals. Everything we do is focused on our favorite: ROI. Like anything someone loves, numbers deserve understanding, respect and proper treatment.