Today we live in a world where customer expectations are higher than ever. Next-day or even same-day deliveries are becoming the norm rather than the exception. Stories of bad experiences can go viral on social media in a matter of minutes. Multiple social media channels, email, chat rooms and call centres give customers many ways to communicate with a brand, and the immediacy of modern communications creates the expectation that queries will be dealt with swiftly.
For many years now customer experience professionals have made use of customer satisfaction and experience surveys to help them understand their customers. When tracked over time, they can provide useful insight into what is going well and what could be done better. But with things changing so fast, how can we ensure these surveys remain relevant and continue to provide useful feedback?
Here are some of my observations and tips for managing customer satisfaction surveys:
Be aware of customer feedback from all sources
Customer needs can change fast. New technology can create new expectations that rapidly alter customer perceptions of how well or badly a brand is doing.
It is important to ensure that your customer satisfaction survey measures what is important to customers today – not what was important yesterday. If the market changes, that might mean we ought to be measuring something new or measuring something in a different way.
We can pick up early signs that change might be necessary from a variety of sources, both internal and external to the brand. Internally, anecdotal feedback from salespeople, accounts staff or service engineers is all potentially important. So too are customer comments on social media – even when they present a significant PR problem in the short run, they also give the business an opportunity to learn something useful.
Finally, many customer surveys include open comments questions, and it is always useful to keep an eye on these for anything important that the business might not be aware of.
The challenge is to gather all this disparate, unstructured and sometimes entirely anecdotal information together and pick out what is important. It is a good idea to set aside time at regular, pre-planned intervals for a more comprehensive review, where all these different inputs can be collated and considered.
Understanding what customer feedback is really telling us
Imagine a service engineer is late for an appointment and the customer vents their frustration on Facebook: “The engineer was three hours late!” Or perhaps in another case we hear that “the engineer missed the appointment.”
On the surface of it the problem seems obvious. The engineer was late, or the appointment was missed. You review your customer experience questionnaire and sure enough there are questions that appear to cover such things as late or missed appointments. So, the survey can monitor any issues in these areas as things stand. That seems clear enough – but is it the full story?
Would these comments have been different if the engineer had called in advance to say they were running late? And did anyone contact the second customer in advance to let them know the appointment had been cancelled? Keeping people well informed about delays and cancellations may be an issue here too, since good communication can go a long way towards alleviating dissatisfaction. Is this adequately covered in our survey?
Sometimes it helps not only to focus on the specific problems mentioned in a customer satisfaction survey, but also to think about what other factors might be at play that made a negative experience better or worse. Taking time to consider such issues helps ensure that we are measuring everything we need to, and provides useful ideas for improving the survey.
Consistency vs. Evolution
When managing any tracking survey over time we are faced with two opposing practical considerations:
- The need for consistency: often the most powerful insights from a tracking survey come from the ability to compare like-for-like measures month on month.
- The need for evolution: but markets and customer needs are ever changing, and maintaining consistency for its own sake risks missing key emerging issues.
This is a difficult balancing act, but taking the time to step back and formally review the programme at pre-planned intervals is the best way to ensure that nothing important is missed.
However, just as we may need to add to our survey, it is also important to ensure that unused questions are not being retained unnecessarily. Long surveys lead to poor response rates and, after all, you do not want to annoy your customers by taking up too much of their time!
When pop-up surveys fail
How many times have you landed on a website and immediately been confronted with a pop-up survey asking for your opinion of it?
What a waste of time and effort some of these surveys are – and what a wasted opportunity!
How can anyone meaningfully answer questions about a website that they have only just clicked through to for the first time and have not yet had the chance to look at?
The answer is that they cannot. Either they will just close the pop-up, or they will fill it out with meaningless information.
Unless people have the option to complete the pop-up survey after they have finished navigating the site, it is a pointless exercise. Offering it at the end may generate a lower response rate, but at least the responses will be meaningful.
Is NPS always useful?
NPS (Net Promoter Score) is a widely used metric in customer satisfaction and loyalty research, and with good reason. Because so many organisations have tracked it for so long, good benchmarks are available to compare your brand against. It has also often been shown to translate directly into revenue trends for many businesses.
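For readers unfamiliar with the mechanics, NPS is derived from a single question – “How likely are you to recommend us?” – answered on a 0–10 scale. The standard calculation is the percentage of promoters (9–10) minus the percentage of detractors (0–6); a minimal sketch, with illustrative response data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (scores 9-10) minus % detractors
    (scores 0-6), from 0-10 answers to the 'likely to recommend' question.
    Passives (7-8) count in the total but in neither group."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of 10 responses: 4 promoters, 3 passives, 3 detractors
responses = [10, 9, 9, 9, 8, 7, 7, 6, 5, 3]
print(nps(responses))  # → 10  (40% promoters - 30% detractors)
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters).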
But what holds true for a great number of brands is not always true for a particular industry or a particular brand. If you are using NPS, track it over time and compare it with internal data on sales revenues, repeat business levels and the like. Is there a relationship? If so, then NPS can fairly be called a key metric for the business. If not, you may need to look at more appropriate alternatives, or find a modified way of looking at it that is more relevant to the situation at hand.
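One simple way to test for such a relationship is to compute the correlation between monthly NPS and monthly revenue over the same period. A sketch using the standard Pearson coefficient, with entirely hypothetical figures (a real check would also want more data points and some care about lag, since satisfaction may affect revenue months later):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical six months of NPS alongside revenue (in £k) for the same months
monthly_nps = [12, 15, 11, 18, 22, 20]
monthly_revenue = [410, 430, 405, 450, 480, 470]
r = pearson_r(monthly_nps, monthly_revenue)  # close to +1 here
```

A coefficient near +1 would support treating NPS as a key metric; a value near zero suggests it is not tracking anything the business cares about.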
The elephant in the room
GDPR, CAN-SPAM, data privacy.
Marketers and market researchers generally do not like to spend too much time focusing on such things. But corporate data compliance officers do. Compliance and marketing may fight like cat and dog, but these days we cannot afford to leave the compliance side of customer satisfaction research to the last minute when designing a survey.
How will the customers be contacted, who will contact them and what information will need to be used by whom? The earlier compliance is involved in these questions the better. After all, we don’t want to end up designing something only for a compliance officer to turn around at the eleventh hour and say, “Oh no, you can’t do that.”
These days devising a solid strategy for handling customer data in a way that safeguards privacy and complies with regulations should always form a critical part of the briefing and proposal process for any customer satisfaction survey.