Customer Satisfaction


Understanding Customer Experience

Customer experience surveys are widely used in business today, and for good reason: they are a proven way to gather invaluable feedback from customers.

But whether you’ve been running a customer survey for years or have never run one at all, it’s worth reflecting on the benefits they can offer, and on how to go about getting the most from them.

Why run a customer experience survey?

Quite simply, happy customers are loyal customers, and loyal customers deliver more recommendations and more repeat business.

A well-designed customer satisfaction survey will deliver several important business benefits:

  1. By monitoring customer opinion, you’ll have early warning of any potential problems that might cause you to lose revenue further down the line.
  2. It will tell you which improvements will do the most to boost customer loyalty and sales.
  3. It can also tell you what improvements/changes are unlikely to deliver much value.
  4. In short, it prioritises what you need to do to nurture a loyal customer base.

So, when a customer survey is well designed and used effectively, it serves as a positive vehicle for tangible business improvements.

However, it is possible to get it wrong.  And if that happens, you might end up with something that delivers limited actionable insight.

So, how do you ensure your customer survey falls into the former category rather than the latter? Here’s a list of pointers I’ve pulled together, based on over 30 years’ experience of designing and running such programs:

Make sure you set the right objectives to start with

Let’s start at the very beginning by looking at the overall business objectives for a customer survey.  Get that wrong and you’ll be lucky to get any real value from the exercise.

The most fundamental reason such surveys fail is that they were designed as a box-ticking exercise from the very start. If a survey is just used to provide internal reassurance that all is well, it isn’t ever going to serve as an effective agent for change.

Fortunately, this kind of problem is rare.  A more common issue is that these surveys are sometimes used in a purely tactical way, with performance scores for each area of the business used to directly inform such things as bonuses and performance reviews.  That’s all fine, but if this is the only tangible way in which the survey is used, it’s a missed opportunity.

Don’t get me wrong, there’s value in using surveys to tactically monitor business performance.  But their true value lies in using them to guide business improvement at a strategic level.  If we lose sight of this goal, we won’t ever get the most out of such surveys.

Takeaway:  The primary goal of a customer experience survey should move beyond monitoring performance to providing direction for business improvement.

Using standard templates is just a starting point

Moving on to how we go about designing a survey, the next trap people can fall into is taking the easy option.  By that, I mean running a generic customer survey based on standard templates.

It is easy enough to find standard templates for customer experience surveys online.  Most DIY survey tools will provide them as part of the service.  Standard questions for such things as NPS and generic performance scoring are readily available.

But standard templates are, as the name implies, standard.  They are designed to serve as a handy starting point – not an end in themselves.

There is nothing in any of them that will be unique or specific to any business.  As a result, if you rely purely on a standard template, you’ll only get generic feedback.

That might be helpful up to a point, but to get specific, actionable insight from a survey you need to tailor it to collect specific, actionable feedback from your customers.  And that means asking questions about issues that are specific to your business, not any business.

Takeaway:  Only ever use standard templates as a starting point.  Always tailor customer experience surveys to the specific needs of your business.

Avoid vague measures, focus on actionable ones

It may sound obvious, but it’s important to make sure you are measuring aspects of business performance that are clearly defined and meaningful.  Each measure needs to be specific, so there is no confusion over what it does or doesn’t mean when you come to look at the results.

Leaving these definitions too broad or too generic can make it very hard to interpret the feedback you get.

Let’s take an example – ‘quality’.  What exactly does that mean?  It might mean slightly different things in different industries.  And it might mean different things to different people, even within the same organisation.

If your product is machinery, product quality could refer to reliability and the ability to run with minimal downtime.  However, it might also relate to the quality of work the machine produces.  Or perhaps, under certain circumstances, it might refer more to accuracy and precision.  When you think about it, ‘quality’ could encompass a range of different things.

To avoid potentially confusing outcomes of this sort, you need to use more specific phrasing.  That way, when you identify an area that needs improvement, it’s clear what needs to be done.
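To make that concrete, here is a minimal sketch of what the change might look like if your questionnaire were defined in code, using the machinery example above.  Everything here is invented for illustration – the wording, the identifiers and the 1–5 scale are example assumptions, not a recommended template:

```typescript
// Hypothetical questionnaire fragment: one vague measure replaced by
// specific, actionable ones (wording and scale invented for illustration).
interface RatingQuestion {
  id: string;
  wording: string;
  scale: "1-5";
}

// Too vague: a poor score here doesn't tell you what to fix.
const vague: RatingQuestion = {
  id: "quality",
  wording: "How would you rate the quality of our product?",
  scale: "1-5",
};

// Specific: a poor score on any of these points to a concrete improvement.
const specific: RatingQuestion[] = [
  { id: "reliability", wording: "How would you rate the machine's reliability (unplanned downtime)?", scale: "1-5" },
  { id: "output_quality", wording: "How would you rate the quality of the work the machine produces?", scale: "1-5" },
  { id: "precision", wording: "How would you rate the machine's accuracy and precision?", scale: "1-5" },
];
```

When the ‘reliability’ score dips, you know to look at downtime; a dip in a generic ‘quality’ score tells you nothing of the sort.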

Takeaway:  Ensure you’re testing specific measures of business performance. 

Always provide a mechanism for open feedback

Not everyone will answer open-ended questions.  Indeed, surveys can fall into the trap of asking too many of them, leading to a poor response.

However, one or two well-targeted open questions will provide invaluable feedback.  They are a golden opportunity to pick up on issues and opportunities for improvement that you haven’t thought of, but which your customers have!

Takeaway: Always include one or two well-targeted open questions to elicit feedback from customers.  But don’t add too many, or response rates will suffer and the quality of answers will be diluted.

Ensuring insight is actionable

Of course, you might already have a customer experience survey.  Perhaps it has been running for years.  If it is delivering good value, then happy days.  However, that’s not always the case.

Sometimes people find that the outputs from an existing customer experience survey are not particularly actionable.  If that is the case, then it’s a clear warning sign you’re doing something wrong.

There are only two reasons why this ever happens:

Reason 1: Senior management are incapable of driving positive change, even when provided with clear direction as to what they should be doing.

Reason 2: The survey was poorly designed in the first place and is unlikely ever to deliver anything actionable.

Unfortunately, the first of these problems can’t be solved by a survey, or any other form of market insight come to that!  But it is possible to do something about the second.

The answer is simple – you need to redesign your customer experience survey.  Don’t keep re-running it and repeating the same old mistakes.

Takeaway: If your customer experience survey is not delivering actionable insight, stop running it.  You need to either re-design it or save your money and not bother!

Legacy questions and survey bloat

Has your customer survey been running for several years now?  Does the following pattern sound familiar?

  • Every year, the previous year’s questionnaire gets circulated to survey stakeholders for feedback.
  • Each stakeholder comes back with feedback that involves adding new questions, but rarely suggests taking any of the old ones away.
  • Some of the new questions (perhaps all) relate to some very specific departmental initiatives.
  • The questionnaire gets longer.
  • The response rate goes down as a result.
  • A year goes by and it may not be entirely clear what has been done with the outputs of some of these questions.
  • The process repeats itself…

Of course, there is a benefit in maintaining consistency.  However, there’s little point in measuring things that are no longer relevant to the business.

It may well be time for a more fundamental review. 

Maybe even consider going back to square one and running some qualitative research with customers. Could you be missing something vitally important that a few open conversations with customers could reveal?

Alternatively, maybe you need to run some internal workshops.  How well do current priorities really align with legacy questions in the survey?

Takeaway: If you think your customer survey has become overly bloated with legacy questions, don’t shy away from carrying out a full review.  

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical and general content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

What can Cyberpunk 2077 teach us about brand reputation?

Promising Greatness

When Cyberpunk 2077 launched with considerable marketing and general industry hype at the end of 2020, it was a game that promised greatness.

A lot of people were predicting that Cyberpunk 2077 would be the big game of 2021.

CD Projekt Red had established an enviable reputation for quality on the back of the success of its Witcher franchise.  And the early signs were all good.  The setting, the graphics, the story and the atmosphere of the world they’d created all received praise and the pre-launch marketing led to incredibly high volumes of pre-orders for the game.

The stage was set for the game to really explode. 

Hitting problems

However, as we all know, not long after its launch, Cyberpunk 2077 hit problems.  Numerous bugs and performance issues came to light that eventually led to it being pulled from the PlayStation store.  It seemed to have particular problems running on the older console technology.

It wasn’t long before refunds were being offered.  If anything, the high-profile hype surrounding the launch made matters worse.  The actual delivery had clearly not matched the high customer expectations.

CDPR were now looking at a damage limitation exercise.  The reputational cachet their brand had cultivated over the years was now under serious threat.

“It takes many good deeds to build a good reputation, and only one bad one to lose it.”—Benjamin Franklin

Ongoing Problems

Ever since then, CDPR have been working hard to issue bug fixes.

As Game Rant noted in mid-April, “The 1.2 patch notes for Cyberpunk 2077 are the length of a short novella and show a lot of changes.”  But despite this, the game still seemed to suffer from more bugs than an anthill.

It is clear that all this has had an impact, and CDPR’s reputation has taken a serious knock amongst its fanbase.  Comments posted on the Steam community in April 2021 made it quite clear that the game still faced some serious unresolved problems, even at that point.

“To begin i wish to say i was a Gigantic fan and supporter of Cdpr and this game.  Regardless too many things sold this game to me and i got none of them.”

Mind you, some people still felt Cyberpunk was value for money, even if it had not met expectations:

“I look at it this way. I got my money’s worth out of the game even though the experience wasn’t as good as I’d hoped.”

Hype?

And some felt that the pre-launch overhype was the real problem here, and that people who bought the game on pre-order were setting themselves up for disappointment:

“That’s a ‘buyer beware’ problem.”

Such comments may have been made in defence of CDPR, but they also reflect a view that other fans got their fingers burnt precisely because they believed the pre-launch hype and pre-ordered the game.

The lesson some fans may well take from this is therefore twofold:

  1. Don’t believe the pre-launch marketing (or at least, don’t take it at face value).
  2. Don’t pre-order.

However, the potential longer term damage to CDPR’s reputation could be a good deal more serious, judging by some of the comments:

“I’m just dumbfounded how a company could make a blunder like this. They were in a position that was coveted by large corporations, true customer loyalty and customer love. They didn’t even need a marketing department.

…Now, they are relegated to being just any other gaming company.”

How did it happen?

How could CDPR go so quickly from hero to zero, in the eyes of many of its fans?

Obviously, this was at root a problem caused by rushing an unfinished product to market.  That would not necessarily have been an issue but for the pre-launch marketing hype that had raised expectations so high.

The scale of the problem was at least partly acknowledged by Co-CEO Marcin Iwiński, who issued an apology to fans in January and admitted that CDPR had “underestimated the risk” they faced in developing the console versions of the game.

Iwiński took responsibility for what had happened and urged people not to blame the developers, a stance that appears to have been backed up internally by ditching plans to link developers’ bonuses to game review scores.

You have to have some sympathy for the CDPR developers.  I’m sure they set out with the intention of making the best game possible.  But, as Jason Schreier of Bloomberg News highlighted, several developers felt the deadlines set were overambitious given the lack of adequate resourcing to meet them.  You can be the best developer in the world, but if you are forced to work to silly deadlines and overambitious targets, you are being set up to fail before you even start.  You’d have to be a bit hard-hearted not to feel their pain.

Overall, then, it looks like a classic case of overambition: overambition in what CDPR set out to do with the technology, overambition in the timescales they set themselves to do it, and overambition in the marketing hype they created for the game in the run-up to launch.

Ambition is good.  But when the gap between stated ambition and reality becomes wide enough, that’s when you end up with problems such as those that have plagued Cyberpunk 2077. 

How has it impacted on CDPR?

Despite its problems, the pre-orders and the initial rush of sales for the game tell a story of financial triumph.  Cyberpunk 2077 pushed CDPR’s 2020 sales to record levels – $562 million, compared with a previous best year of $210 million in 2015.

Superficially that would appear to suggest that, despite its problems, Cyberpunk 2077 has nevertheless been a huge success.

However, the $562 million figure is largely the result of pre-orders, which were driven by the pre-launch marketing rather than by the actual performance of the game.  It also does not fully factor in the refunds that followed in the new year, nor the longer-term costs of all the extensive bug-fixing.  We don’t yet know how well the game has fared in terms of post-launch sales in 2021.

The longer-term impact of the game’s problems may ultimately be that it fails to perform as well as it might have done.  At one time it was heralded as the big blockbuster of 2021, and initial sales suggested it was very much on track for that.

In December 2020 it achieved the fourth-highest number of concurrent players ever recorded on Steam.  By the start of May 2021, however, it ranked #62 on Steam – lower than the ranking held by CDPR’s previous blockbuster (Witcher 3).  CDPR’s stock price also tells a tale, dropping by nearly 18% between 6 April and 5 May 2021.  These are not good signs.

Reputation

However, the short- to mid-term impact on CDPR’s financial performance is unlikely to see them turn a loss.  More likely, the company will simply struggle to make the kind of money from Cyberpunk 2077 that was envisaged back in the heady days of last November.

More damaging for CDPR may be the longer-term reputational harm.  It might become very difficult to persuade anywhere near as many fans to pre-order their games in future.  It may also mean that future marketing will simply not be believed – after all, no one believed the boy who cried wolf, even when the wolf really was coming.  And fans may adopt a wait-and-see policy on future games, delaying purchase until customer reviews confirm that a new release is genuinely up to scratch.

Re-building Reputation

CDPR are not the first company, and will by no means be the last, to suffer reputational damage of this kind.  It has happened before in plenty of industries outside gaming, and it is easy to list examples once you start to look: the VW emissions scandal, the Whirlpool product recall, BP’s Deepwater Horizon, and so on.

A classic case study in how to manage such a crisis is Johnson & Johnson’s 1982 recall of its best-selling Tylenol painkiller, after seven people in the Chicago area died from cyanide-laced capsules.  The key to getting on top of it was to act fast and to be as transparent as possible.  The company incurred a large cost at the time, but because they acted so quickly the crisis was over quickly too.  Their share price recovered within a year.

Samsung provide a more recent example in their response to the exploding Note 7 batteries in October 2016.  The company realised that its long-term reputation was far more important than any short-term financial hit and invested considerable money and resources in solving the problem.  They deployed 700 researchers and engineers to test over 30,000 batteries in every extreme condition possible.  They even invited in third-party auditors.  Once they had identified the problem (by January 2017), they communicated it to the public and announced a new quality assurance program and new safety features in a bid to reassure consumers.

Samsung’s share of the mobile phone market today is very similar to what it was in 2016.  They have consistently maintained a market share of 20% or higher in every year since the crisis.

Lessons Learnt

Reputational damage is the real threat to the future success of CDPR.  Others have been there before and offer some clear lessons as to how to respond to such issues.

First, you need to understand the scope of the problem – both in terms of the actual product issues and in terms of how these have impacted on customer perceptions. 

Then, secondly, you need to act fast and decisively.  Don’t skimp on resources; get the issues sorted out as fast as you can, even if that means taking a significant cost hit in the short term.

Thirdly, you need to be transparent.  Explain the full extent of the problem to customers, what you are doing to fix it and provide realistic timeframes (that you stick to) in terms of sorting it all out.

Fourthly, you need to recognise that you have a PR problem.  Even if you solve the actual product problems, you need to communicate this effectively and keep reinforcing it until the message gets through.

Finally, you can’t be the boy who cries wolf and constantly promise to sort things out, but then shift the deadlines, or offer up only partial solutions that fall short of expectations.  That is the sure-fire way to make things far worse for yourself in the long term.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

We offer market research services and opinion polling to clients in the gaming, media and leisure industry.  You can read more about this on our website.   We can also help provide market research that will enable your business to understand and track your brand reputation with customers.  You can read more about this service and about how to use market research to help manage your brand reputation here.

Sources

BBC

Bloomberg

Business Insider

Counterpoint

Den of Geek

Entrepreneur.com

Game Rant

Harvard Business Review

IDC

Push Square

Steam Charts

Steam Community

TechRadar

The Verge

The Art of Understanding Customer Satisfaction

Today we live in a world where customer expectations are higher than ever.  Next-day or even same-day deliveries are becoming the norm rather than the exception.  Stories of bad experiences can go viral on social media in a matter of minutes.  Multiple social media channels, email, chat rooms and call centres give customers many ways to communicate with a brand, and the immediacy of modern communications creates the expectation that queries will be dealt with swiftly.

For many years now customer experience professionals have made use of customer satisfaction and experience surveys to help them understand their customers.  When tracked over time, they can provide useful insight into what is going well and what could be done better.  But with things changing so fast, how can we ensure these surveys remain relevant and continue to provide useful feedback?

Here are some of my observations and tips for managing customer satisfaction surveys:

Be aware of customer feedback from all sources

Customer needs can change fast.  New technology can create new expectations that rapidly alter customer perceptions of how well or badly a brand is doing.

It is important to ensure that your customer satisfaction survey measures what is important to customers today – not what was important yesterday.  If the market changes, that might mean we ought to be measuring something new or measuring something in a different way.

We can pick up early signs that change might be necessary from a variety of sources, both internal and external to the brand.  Internally, anecdotal feedback from salespeople, accounts staff or service engineers is all potentially important.  So too are customer comments on social media: even when they present a significant PR problem in the short run, they also provide an opportunity for the business to learn something useful.

Finally, many customer surveys include open comments questions, and it is always useful to keep an eye on these for anything important that the business might not be aware of. 

The challenge is to gather all this disparate, unstructured, and sometimes entirely anecdotal information together and pick out what’s important.  It is a good idea to set aside time, at regular pre-planned intervals, for a more comprehensive review where all these different inputs can be collated and considered.
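As an illustration, here is a minimal sketch of one way to collate such feedback ahead of a review.  Everything in it is hypothetical: the sources, the theme keywords and the data shapes are invented for the example, and a real program would refine the themes over time or use proper text analytics.  Even so, a simple tally of recurring themes by source can show at a glance where issues are clustering:

```typescript
// Hypothetical feedback item: free text plus the source it came from.
interface FeedbackItem {
  source: "sales" | "service" | "social" | "survey_comments";
  text: string;
}

// Illustrative theme keywords, invented for this sketch.
const THEMES: Record<string, string[]> = {
  lateness: ["late", "delay", "missed appointment"],
  communication: ["not informed", "no one told", "didn't call"],
  quality: ["fault", "broke", "unreliable"],
};

// Count how often each theme is mentioned, broken down by source,
// so a review meeting can see where issues cluster.
function tallyThemes(items: FeedbackItem[]): Map<string, Map<string, number>> {
  const tally = new Map<string, Map<string, number>>();
  for (const item of items) {
    const text = item.text.toLowerCase();
    for (const [theme, keywords] of Object.entries(THEMES)) {
      if (keywords.some((k) => text.includes(k))) {
        const bySource = tally.get(theme) ?? new Map<string, number>();
        bySource.set(item.source, (bySource.get(item.source) ?? 0) + 1);
        tally.set(theme, bySource);
      }
    }
  }
  return tally;
}

// Example: a social post and an engineer's note hitting the same theme.
const report = tallyThemes([
  { source: "social", text: "The engineer was three hours late!" },
  { source: "service", text: "Customer unhappy - appointment delayed." },
]);
console.log(report.get("lateness")); // Map { 'social' => 1, 'service' => 1 }
```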

Understanding what customer feedback is really telling us

Imagine a service engineer is late for an appointment and the customer vents their frustration on Facebook: “The engineer was three hours late!”  Or perhaps, in another case, we hear that “the engineer missed the appointment.”

On the surface, the problem seems obvious.  The engineer was late, or the appointment was missed.  You review your customer experience questionnaire and, sure enough, there are questions that appear to cover late or missed appointments.  So the survey can monitor issues in these areas as things stand.  That seems clear enough – but is it the full story?

Would these comments have been different if the engineer had called ahead to say they were running late?  And did anyone contact the second customer in advance to let them know the appointment was cancelled?  Keeping people well informed about delays and cancellations may also be an issue here, in so far as it can help alleviate the feeling of dissatisfaction.  Is that adequately covered in our survey?

Sometimes it helps not only to focus on the specific problems mentioned in a customer satisfaction survey, but also to think about what other factors might be at play that could have made a negative experience better or worse.  Taking time to think through such issues helps ensure we are measuring everything we need to, and provides useful ideas on how we might improve the survey.

Consistency vs. Evolution

When managing any tracking survey over time we are faced with two opposing practical considerations:

  • The need for consistency:  often the most powerful insights from a tracking survey come from the ability to compare like-for-like measures month on month.
  • The need for evolution: but markets and customer needs are ever changing.  The flip side of maintaining consistency for its own sake is that we risk missing key emerging issues.

This is a difficult balancing act, but taking the time to step back and formally review the program at pre-planned intervals is the best way to ensure that nothing important is missed.

However, just as we may need to add to our survey, it is also important to ensure that unused questions are not being retained unnecessarily.  Long surveys lead to poor response rates and, after all, you do not want to annoy your customers by taking up too much of their time!

When pop-up surveys fail

How many times have you been onto a website and immediately been confronted with a pop-up survey asking you for your opinion of the website? 

What a waste of time and effort some of these surveys are – and what a wasted opportunity!

How can anyone meaningfully answer questions about a website that they have only just clicked through to for the first time and have not yet had the chance to look at?

The answer is that they cannot.  Either they will just close the pop-up, or they will fill it out with meaningless information.

Unless people have the option to complete the pop-up survey after they have finished navigating their way around the site, it is a pointless exercise.  Deferring the prompt in this way might generate a lower response rate, but at least the responses will be meaningful.
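As a rough sketch of that alternative, the prompt can be held back until there is some evidence the visitor has actually used the site.  The thresholds, the storage key and the showSurveyPrompt stub below are all assumptions invented for the example; a real implementation would hook into whatever survey widget you already use:

```typescript
// Illustrative thresholds; tune these to your own site and traffic.
const MIN_TIME_ON_SITE_MS = 60_000; // at least a minute on the site
const MIN_PAGES_VIEWED = 3;         // and a few pages actually seen

const arrivedAt = Date.now();
let surveyShown = false;

// Count page views across the visit (hypothetical storage key).
const pagesViewed = Number(sessionStorage.getItem("pagesViewed") ?? "0") + 1;
sessionStorage.setItem("pagesViewed", String(pagesViewed));

// Stand-in for whatever survey widget you actually use.
function showSurveyPrompt(): void {
  console.log("Survey prompt would open here.");
}

// Only ask once the visitor has genuinely engaged - and only ask once.
function maybeShowSurvey(): void {
  const engagedLongEnough = Date.now() - arrivedAt >= MIN_TIME_ON_SITE_MS;
  if (!surveyShown && engagedLongEnough && pagesViewed >= MIN_PAGES_VIEWED) {
    surveyShown = true;
    showSurveyPrompt();
  }
}

// Re-check periodically instead of firing on first paint.
setInterval(maybeShowSurvey, 15_000);
```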

Is NPS always useful?

NPS (Net Promoter Score) is a widely used tool in customer satisfaction and loyalty research, and there is good reason for that.  It asks customers how likely they are, on a 0–10 scale, to recommend you; the score is the percentage of promoters (scoring 9–10) minus the percentage of detractors (scoring 0–6).  Because it has been so widely used for so long, there are good benchmarks around that you can compare your brand to, and it has often been shown to translate directly to trends in revenue for many businesses.

But what holds true for a great many brands is not always true for a particular industry or a particular brand.  If you are using NPS, track it over time and compare it with internal data on sales revenue, repeat business levels and so on.  Is there a relationship?  If so, NPS can fairly be called a key metric for the business.  If not, you may need to look at more appropriate alternatives, or find a modified measure that is more relevant to the situation in hand.
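To make both the definition and that sanity check concrete, here is a minimal sketch.  The quarterly figures are invented purely for illustration; in practice you would feed in your own tracked NPS and internal revenue data:

```typescript
// NPS from raw 0-10 "likelihood to recommend" scores:
// % promoters (9-10) minus % detractors (0-6).
function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return ((promoters - detractors) / scores.length) * 100;
}

// Pearson correlation, to check whether NPS moves with revenue over time.
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// e.g. four promoters and two detractors out of seven => NPS of about 28.6
console.log(netPromoterScore([10, 9, 8, 6, 10, 3, 9]).toFixed(1));

// Invented quarterly figures, purely for illustration.
const quarterlyNps = [12, 18, 25, 31];         // tracked NPS by quarter
const quarterlyRevenue = [1.9, 2.1, 2.4, 2.6]; // revenue by quarter, $m

// A strong positive correlation suggests NPS is a useful metric here;
// a weak or negative one suggests looking for a more relevant measure.
console.log(pearson(quarterlyNps, quarterlyRevenue).toFixed(2));
```

A correlation over four points is obviously only indicative; with real data you would want a much longer series before drawing conclusions.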

The elephant in the room

GDPR, CAN-SPAM, data privacy. 

Marketeers and market researchers generally do not like to spend too much time focusing on such things.  But corporate data compliance officers do.  Corporate compliance and marketing may fight like cat and dog, but these days we cannot afford to leave the compliance side of customer satisfaction research to the last minute when designing a survey.

How will customers be contacted, who will contact them, and what information will need to be used by whom?  The earlier compliance is involved in these questions, the better.  After all, we don’t want to end up designing something only for a compliance officer to turn around at the eleventh hour and say, “Oh no, you can’t do that.”

These days devising a solid strategy for handling customer data in a way that safeguards privacy and complies with regulations should always form a critical part of the briefing and proposal process for any customer satisfaction survey.
