Lies, damn lies, and statistics – Measuring and listening to the Voice of your Customer

06 April, 2015 by Graham Cornell | Digital analytics, Strategy, Omnichannel

The world loves statistics – it’s how we measure everything from Apple’s success to the declining number of zebras. We like them because they are easy to understand and they help us make ‘informed’ decisions. They support our arguments.

A current favourite of mine relates to the annual ‘get fit for summer’ push. “Did you know that refraining from drinking booze for a month reduces fat from your internal organs by 20%?” I didn’t know that, and as someone who gives up booze for at least one month a year, this is great news! Hearing the same statistic from multiple people spurs me on, despite having no evidence that it’s true.

This leads me to conclude that if I reduce my alcohol consumption by 50% for a year, I should be able to continue to lose 10% of my surplus fat, leading me to be super lean by… well, this time next year.

It sounds plausible, achievable, and best of all, it sounds like a plan. The fact that this doesn’t consider my diet, exercise, volume of consumption, or my starting physical condition doesn’t matter. It’s a statistic that I want to believe.

The point of my illustration is that we need to be careful of these false positives in the digital world – particularly when considering digital strategies related to the Voice of the Customer, which doesn’t have the benefit of being directly linked to key business performance metrics such as revenue (that isn’t to say the two aren’t related).

Not wishing to single out a particular tool or methodology, I’m going to start with the Net Promoter Score (NPS) – mainly because it’s one of the simplest and most popular methods.

“Over the last 12 months, we have achieved an average NPS of 12, putting us on par with leading brands in our sector and representing a 100% increase in our customer satisfaction rating.”

What a great statement! We’re winning… right? Our strategy works! Unfortunately not.

To illustrate my point, it’s important to note that NPS is a ‘moment in time’ score (usually taken monthly): it gives you a rating based on the percentage of promoters (those scoring 9–10) minus the percentage of detractors (those scoring 0–6).

The ‘moment in time’ part is key because, over an extended period, the value can vary depending on your initiatives, business performance, and other factors, e.g. the weather (seriously, it affects our mood).

If you are a retailer, in the run-up to Christmas you might see a drop in satisfaction as you struggle to deal with increased volumes. But in the New Year, with a consumer-friendly returns policy, you might get a strong rebound. Easter might again cause issues as the sales season returns, but let’s say sales drop off in the summer. The mean might work out in your favour, but in finding this ‘good news’ you’ve overlooked that your customers are most unhappy at your critical trading times.
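To put rough numbers on that, here’s a minimal sketch in Python – all responses are invented – of how a monthly NPS is derived (the percentage of promoters, scoring 9–10, minus the percentage of detractors, scoring 0–6) and how an annual average can bury a seasonal dip:

```python
# Minimal sketch of the NPS arithmetic, using invented survey responses.
# Promoters score 9-10, detractors 0-6; the score is the percentage of
# promoters minus the percentage of detractors.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

monthly_responses = {
    "December": [3, 5, 9, 10, 6, 4, 8, 2, 9, 5],    # peak-trading strain
    "January":  [9, 10, 8, 9, 10, 7, 9, 10, 6, 9],  # post-returns rebound
}

scores_by_month = {month: nps(scores) for month, scores in monthly_responses.items()}
print(scores_by_month)                                        # {'December': -30, 'January': 60}
print(sum(scores_by_month.values()) / len(scores_by_month))   # 15.0 - the 'healthy' average
```

An average of 15 looks respectable, yet in this made-up example one of the busiest trading months sat at –30.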

That’s a missed opportunity and something that could be easy to spot if you’re willing to accept critical feedback.

There are also other metrics to consider:

  • Volume – Again, we work in percentages, so 50% satisfaction in the summer (based on 10 responses) doesn’t really outweigh 10% satisfaction based on 10,000 responses (see the sketch after this list).
  • Channel – Where were you testing? In store, online, mobile app, call centre… all of the above? Each channel might have a different experience so it’s important to understand where the challenges are.
  • Action – Linked to channel, what was the customer trying to do? Buy something, return something, get advice…? The response could be linked to the action.
  • Point of survey – Where in the journey were you surveying? Was it at the end of an interaction, randomly during a browsing session, or upfront, e.g. as soon as a customer hits your online channel?
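On the volume point, a quick sketch (invented figures) shows how much a handful of happy summer responses can distort an unweighted average:

```python
# Hypothetical illustration of the volume point above: a naive average of
# period percentages vs. one weighted by the number of responses received.

periods = [
    # (label, % satisfied, number of responses)
    ("Summer",    50, 10),
    ("Christmas", 10, 10_000),
]

naive_average = sum(pct for _, pct, _ in periods) / len(periods)
weighted_average = sum(pct * n for _, pct, n in periods) / sum(n for _, _, n in periods)

print(f"Naive average:    {naive_average:.1f}%")     # 30.0% - looks half decent
print(f"Weighted average: {weighted_average:.1f}%")  # 10.0% - closer to reality
```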

This may seem obvious, but we do see this kind of statistical massaging in practice.

Measurements like NPS are valuable because they act as a universal benchmark and they are simple. For your customer, it’s a single question – “Would you recommend us to a friend or colleague?” – answered with a mark from 0 to 10 (10 being good). For you, it’s reasonably easy to get comparable metrics from others in your sector. Best of all, it’s quite simple to implement.

What simple surveys don’t provide is the detail. For that, it’s beneficial to combine them with other metrics to build a complete picture. These might include:

  • Qualitative Questions – opportunities for customers to explain the positive/negative experience that they’ve had.
  • Churn Rate – the number of customers who stop paying for a product or service within a given time period (a simple calculation is sketched after this list).
  • Sentiment Analysis – the substance behind social data, processing language and text to identify and understand consumer feelings and attitudes.
  • Customer Effort Score – the ease of customer interactions or “micro-experiences” and how that impacts loyalty.
  • The Apostle Model – another way to segment customers, based on their satisfaction and loyalty.
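As a quick illustration of the churn rate bullet above (figures invented, and conventions vary by business), one common way to calculate it is customers lost during the period divided by customers at the start of the period:

```python
# Rough sketch of a simple churn rate calculation, using invented figures.
# One common convention: customers lost during the period divided by the
# number of customers at the start of the period.

customers_at_start = 12_000
customers_lost = 540          # cancelled or stopped paying during the period

churn_rate = customers_lost / customers_at_start
print(f"Churn rate: {churn_rate:.1%}")  # 4.5%
```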

Obviously, using too many of these can make the whole process burdensome and, more importantly, metrics like churn rate do not apply to all sectors, but a combination of a select few can dramatically improve your view of the customer experience.

The final challenge is to understand whether it is worth spending time and money on problematic channels. The answer will depend on your business model and an understanding of your Customer Acquisition Cost (CAC) and Lifetime Value (LTV).
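To make that trade-off tangible, here’s a back-of-the-envelope sketch – all figures and channel names invented – comparing LTV to CAC by channel:

```python
# Back-of-the-envelope sketch of the LTV vs CAC comparison, per channel.
# All figures are invented; channel names are purely illustrative.

channels = {
    # channel: (customer acquisition cost, lifetime value)
    "online":      (25.0, 180.0),
    "call centre": (60.0, 95.0),
}

for name, (cac, ltv) in channels.items():
    print(f"{name}: LTV/CAC = {ltv / cac:.1f}")

# online: 7.2, call centre: 1.6 - the second channel returns far less per
# pound spent acquiring and serving its customers, which changes how much
# you'd invest in fixing its problems.
```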

Some organisations focus primarily on making the acquisition process as clear and enjoyable as possible, aiming to avoid too much aftercare and, consequently, spending less time and effort on that aspect of their business. Others see strong customer support as a cornerstone of their business model – their Unique Selling Point – and focus their energies there (this is particularly relevant in heavily saturated sectors or sectors where differentiation is difficult).

So in conclusion, the models might be simple, but the devil is in the detail. If you want to listen to the Voice of your Customer, you must take care to truly listen and understand. If they are happy, why are they happy? How can you capitalise on that? If they are unhappy, why are they unhappy? Is it a product issue? A channel issue? A quality issue? Where is this most prevalent in your business, and why?

For me, I figure that the tried and tested advice of a “balanced diet and plenty of exercise” is the way forward… and probably avoiding too many of those “Are you coming to join us for a quick beer after work?” invitations that turn out to be more than one.