Research


Just how accurate are Opinion Polls?

Just after the Brexit referendum result became known, in late June 2016, several newspapers ran stories on how the opinion polls “got it wrong”.

Typical of these was an article in the Guardian from 24th June 2016, with the headline “How the pollsters got it wrong on the EU referendum.”  In it the journalist observed:

“Of 168 polls carried out since the EU referendum wording was decided last September, fewer than a third (55 in all) predicted a leave vote.”

Of course, this is neither the first nor the last time pollsters have come in for criticism from the media.  (Not that this seems to stop journalists writing articles about opinion polls.)

But sensationalism aside, how accurate are polls?  In this article, I’ll explore how close (or far away) the polls came to predicting the Brexit result, and what lessons we might draw from this for the future.

The Brexit Result

On 23rd June 2016, the UK voted by a very narrow margin (51.9% to 48.1%) in favour of Brexit.  However, if we just look at polls conducted near to the referendum, the general pattern was to predict a narrow result.  In that respect the polls were accurate. 

Taking an average of all these polls, the pattern for June showed an almost 50/50 split, with a slight edge in favour of the leave vote.  So, polls taken near to the referendum predicted a narrow result (which it was) and, if averaged, just about predicted a leave result (which happened).

Chart of Brexit vote vs result polls

To compare the predictions with the results, I’ve excluded people who were ‘undecided’ at the time of the surveys, since anyone still ‘undecided’ on the day would presumably not have voted at all.

Of course, the polls did not get it spot on.  But that is because we are dealing with samples.  Samples always have a margin of error, so cannot be expected to be spot on.

Margin of error

The average sample size of polls run during this period was 1,799 (some had sample sizes as low as 800; others, several thousand).  However, on a sample size of 1,799, a 50/50 result would have a margin of error of +/- 2.3%.  That means if such a poll predicted that 50% of people were going to vote leave, we could be 95% confident that between 47.7% and 52.3% would vote leave.
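For anyone who wants to check the arithmetic, the standard margin-of-error formula for a proportion can be sketched in a few lines of Python (a sketch, assuming a simple random sample and a 95% confidence level):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p on a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# The average sample size of the polls discussed above
print(round(margin_of_error(1799) * 100, 1))  # → 2.3
```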

In the end, the average of all these polls came to within 1.7% of predicting the exact result.  That’s close!  It’s certainly within the margin we’d expect.

You might wonder why polls don’t use bigger samples to improve the margin.  If a result looks like being close, you’d think it might be worth using a large enough sample to reduce the error margin.

Why not, for example, use a sample big enough to reduce the statistical error margin to 0.2% – a level that would provide a very accurate prediction?  To achieve that you’d need a sample of around 240,000!  That’s a survey costing a whopping 133 times more than the typical poll!  And that’s a cost people who commission polls would be unwilling to bear.
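Rearranging the same formula gives the sample size needed for any target margin (again a sketch assuming a simple random sample; the cost ratio simply treats cost as roughly proportional to sample size):

```python
import math

def required_sample(margin, p=0.5, z=1.96):
    """Sample size needed to achieve a given margin of error for proportion p."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

n = required_sample(0.002)  # a margin of +/- 0.2%
print(n)                    # → 240100
print(round(n / 1799))      # ratio vs the average poll sample → 133
```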

Data Collection

Not all polls are conducted in the same way, however. Different pollsters have different views as to the best ways to sample and weight their data.  Most of these differences are minor and all reflect the pollster’s experience of what approaches have delivered the most accurate results in the past.  Taking a basket of several polls together would create a prediction more likely to iron out any outliers or odd results resulting from such differences.

However, there is one respect in which polls fall into two potentially very different camps when it comes to methodology.  Some are conducted online, using self-completed surveys, where the sample is drawn from online consumer panels.  Others are conducted by telephone, using randomly selected telephone sample lists.

Both have their potential advantages and disadvantages:

  • Online: not everyone is online and not everyone is easy to contact online.  In particular, older people may use the internet less often.  So, any online sample risks under-representing people with limited internet access.
  • Telephone: not everyone is accessible by phone.  Many of these sample lists are better at reaching people with landlines than mobiles.  That might make it difficult to access some younger people who have no landline, or people registered with the Telephone Preference Service.

But, that said, do these potential gaps make any difference?

Online vs Telephone Polling

So, returning to the Brexit result, is there any evidence to suggest either methodology provides a more accurate result?

Chart of Brexit result vs online and telephone polls

A simple comparison between the results predicted by the online polls vs the telephone polls conducted immediately prior to the referendum reveals the following:

  • Telephone polls: Overall, the average for these polls predicted a 51% majority in favour of remain.
  • Online polls: Overall, the average for these polls predicted a win for the leave vote, with 50.5% (in fact it was 51.9%).

On the surface of things, the online polls appear to provide the more accurate prediction.  However, it’s not quite that simple.

Online polls are cheaper to conduct than telephone polls.  As a result, online polls can often afford to use larger samples.  This reduces the level of statistical error.  In the run up to the referendum the average online poll used a sample of 2,406 vs. an average of 1,038 for telephone polls.

The greater accuracy of the online polls over this period could therefore be largely explained by the fact that they used larger samples.  As telephone is a more expensive medium, it is undeniably easier to achieve a larger sample via the online route.

Accuracy over time

You might expect that, as people get nearer to the time of an election, they are more likely to come to a decision as to how they will vote.

However, our basket of polls in the month leading up to the Brexit vote shows no sign that the proportion of ‘undecided’ voters changed.  During the early part of the month, around 10% consistently stated they had not decided.  Closer to the referendum, this number remained much the same.

However, when we look at polls conducted in early June vs polls conducted later, we see an interesting contrast.  As it turns out, polls conducted early in June predicted a result closer to the actual result than those conducted closer to the referendum.

In fact, the polls seem to have detected a shift in opinion that occurred around the time of the assassination of the MP Jo Cox.

Chart of Brexit result vs polls taken before and after the killing of Jo Cox

Clearly, the average for the early-month polls predicts a result very close to the final one.  The basket of later polls, however, despite the advantage of larger samples, is off the mark by a significant margin.  It is these later polls that reinforced the impression in some people’s minds that the country was likely to vote Remain.

But why?

Reasons for mis-prediction

Of course, it is difficult to explain why surveys conducted so close to the event produced results a little way off the final numbers.

If we look at opinion surveys conducted several months before the referendum, then differences become easier to explain.  People change their minds over time and other people who are wavering will make up their minds.

A referendum conducted in January 2016 would have delivered a slightly different result to the one in June 2016, partly because a slightly different mix of people would have voted, and partly because some people would have held a different opinion in January to the one they held in June.

However, by June 2016, you’d expect that a great many people would have made up their minds.

Logically, however, there are four reasons I can think of as to why there might be a mis-prediction by polls conducted during this period:

  1. Explainable statistical error margins.
  2. Unrepresentative approaches.
  3. Expressed intentions did not match actual behaviour.
  4. “Opinion Magnification”.

Explainable statistical error margins

Given the close nature of the vote, this is certainly a factor.  Polls of the size typically used here would find it very difficult to precisely predict a near 50/50 split. 

51.9% voted Leave.  A poll of 2,000 could easily have predicted 49.7% (a narrow reverse result) and still be within an acceptable statistical error margin.
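To illustrate, here is the 95% interval a poll of 2,000 would produce around the actual Leave share of 51.9% (a sketch under the same simple-random-sample assumption as before):

```python
import math

def conf_interval(p, n, z=1.96):
    """95% confidence interval a poll of size n would give around proportion p."""
    moe = z * math.sqrt(p * (1 - p) / n)
    return p - moe, p + moe

low, high = conf_interval(0.519, 2000)
# The lower bound is 49.7% – so a narrow Remain prediction sits
# right at the edge of what sampling error alone can produce.
print(round(low * 100, 1), round(high * 100, 1))  # → 49.7 54.1
```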

18 of the 31 polls (58%) conducted in June 2016 returned results within the expected margin of statistical error vs the final result.  Where such polls still called the wrong winner (as 3 did), this can be explained purely by the fact that the sample size was not big enough.

However, this means that 13 returned results that can’t be accounted for by expected statistical error alone. 

If we look at surveys conducted in early June, 6 returned results outside the expected bounds of statistical variance.  However, this was usually not significantly outside those bounds (just 0.27% on average). 

The same cannot be said of surveys conducted later in June.  Here, polls were getting the prediction wrong by an average of 1.28% beyond the expected range.  All seven of the surveys that predicted a result outside the expected statistical range predicted a Remain win.

This is too much of a coincidence.  Something other than simple statistical error must have been at play.

Unrepresentative approaches

Not everyone is willing (or able) to answer Opinion Polls. 

Sometimes a sample will contain biases.  People without landlines would be harder to reach for a telephone survey.  People who never or rarely go online will be less likely to complete online surveys.

These days many pollsters make a point of promising a ‘quick turnaround’.  Some will boast that they can complete a poll of 2,000 interviews online in a single day.  That kind of turnaround is great news for a fast-paced media world but will under-represent infrequent internet users.

ONS figures showed that regular internet use was virtually universal amongst the under-55s.  However, as of June 2016, 12% of 55–64-year-olds, 26.9% of 65–74-year-olds and 61.3% of the over-75s had not been online in the previous three months.  Older people were more likely to vote Leave.  But were the older people who don’t go online more likely to have voted Leave than those who do?

It is hard to measure the effect of such biases.  Was there anything about those who could not / would not answer a survey that means they would have answered differently?  Do they hold different opinions?

However, such biases won’t explain why the surveys of early June proved far better at predicting the result than those undertaken closer to the vote. 

Expressed intention does not match behaviour

Sometimes, what people do and what they say are two different things.  This probably doesn’t apply to most people.  However, we all know there are a few people who are unreliable.  They say they will do one thing and then go ahead and do the opposite.

Also, it is only human to change your mind.  Someone who planned to vote Remain in April, might have voted Leave on the day.  Someone undecided in early June, may have voted Leave on the day.  And some would switch in the other direction.

Without being able to link a survey answer to an actual vote, there is no way to test the extent to which people’s stated intentions failed to match their actual behaviour.

However, again, this kind of switching does not adequately explain the odd phenomenon we see in June polling.  How likely is it that people who planned to vote Leave in early June, switched to Remain later in the month and then switched back to Leave at the very last minute?  A few people maybe, but to explain the pattern we see, it would have to have been something like 400,000 people.  That seems very unlikely.

The Assassination of Jo Cox

This brings us back to the key event of 16 June – the assassination of Jo Cox.  Jo was a Labour politician who strongly supported the Remain campaign and was a well-known champion of ethnic diversity.  Her assassin was a right-wing extremist who held virulently anti-immigration views.

A significant proportion of Leave campaigners cited better immigration control as a key benefit of leaving the EU.  Jo’s assassin was drawn from the most extremist fringe of such politics.

The boost in the Remain vote recorded in the polls that followed her death was attributed at the time to a backlash against the assassination: that some people, shocked by the implications of the incident, were persuaded to vote Remain.  Doing so might be seen by some as an active rejection of the kind of extreme right-wing politics espoused by Jo’s murderer.

At the time it seemed a logical explanation.  But as we now know, it turned out not to be the case on the day.

Reluctant Advocates

There will be some people who will, by natural inclination, keep their voting intentions secret. 

Such people are rarely willing to express their views in polls, on social media, or even in conversation with friends and family.  In effect they are Reluctant Advocates.  They might support a cause but are unwilling to speak out in favour of it.  They simply don’t like drawing attention to themselves.

There is no reason to suspect that this relatively small minority would necessarily be skewed any more or less to Leave or Remain than everyone else.  So, in the final analysis, it is likely that the Leave and Remain voters among them will cancel each other out. 

The characteristic they share is a reluctance to make their views public.  However, the views they hold beyond this are not necessarily any different from most of the population.

An incident such as the assassination of Jo Cox can have one of two effects on public opinion (indeed it can have both):

  • It can prompt a shift in public opinion which, given the result, we now know did not happen.
  • It can prompt Reluctant Advocates to become vocal, resulting in a phenomenon we might call Opinion Magnification.

Opinion Magnification

Opinion Magnification creates the illusion that public opinion has changed or shifted to a greater extent than it actually has.  This will not only be detected in Opinion Polls but also in social media chatter – indeed via any media through which opinion can be voiced.

The theory is that the assassination of Jo Cox shocked Remain-supporting Reluctant Advocates into becoming more vocal.  By contrast, it had the opposite effect on Leave-supporting Reluctant Advocates.

The vast majority of Leave voters would clearly not have held the kind of extremist views espoused by Jo’s assassin.  Indeed, most would have been shocked and would naturally have tried to distance themselves from the views of the assassin as much as possible.  This fuelled the instincts of Leave voting Reluctant Advocates to keep a low profile and discouraged them from sharing their views.

If this theory is correct, this would explain the slight uplift in the apparent Remain vote in the polls.  This artificial uplift, or magnification, of Remain supporting opinion would not have occurred were it not for the trigger event of 16 June 2016.

Of course, it is very difficult to prove that this is what actually occurred.  However, it does appear to be the only explanation that fits the pattern we see in the polls during June 2016.

Conclusions

Given the close result of the 2016 referendum, it was always going to be a tough prediction for pollsters.  Most polls will only be accurate to around +/- 2% anyway, so it was always a knife-edge call.

However, in this case, in the days leading up to the vote, polls were not just out by around 2% in a few cases.  They were out by around 3%, on average, predicting a result that was the reverse of the actual outcome.

Neither statistical error, potential biases nor any disconnect between stated and actual voting behaviour can adequately account for the pattern we saw in the polls. 

A more credible explanation is distortion by Opinion Magnification, prompted by an extraordinary event.  However, as the polling average shifted no more than 2–3%, the potential impact of this phenomenon appears to be quite limited.  Indeed, in a less closely contested vote, it would probably not have mattered at all.

Importantly, all this does not mean that polls should be junked.  But it does mean that they should not be viewed as gospel.  It also means that pollsters and journalists need to be alert for future Opinion Magnification events when interpreting polling results.

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing and content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

Sources, references & further reading:

How the pollsters got it wrong on the EU referendum, Guardian 24 June 2016

ONS data on internet users in the UK

Polling results from Opinion Polls conducted prior to the referendum as collated on Wikipedia

FiveThirtyEight – for Nate Silver’s views on polling accuracy


Working with Digital Data Part 2 – Observational data

One of the most important changes brought about by the digital age is the availability of observational data.  By this I mean data that relates to an observation of actual online consumer behaviour.  A good example would be in tracing the journey a customer takes when buying a product.

Of course, we can also find a lot of online data relating to attitudes and opinions but that is less revolutionary.  Market Research has been able to provide a wealth of that kind of data, more reliably, for decades.

Observational data is different – it tells us about what people actually do, not what they think (or what they think they do).  This kind of behavioural information was historically very difficult to get at any kind of scale without spending a fortune.  Not so now.

In my earlier piece I had a look at attitudinal and sentiment related digital data.  In this piece I want to focus on observational behavioural data, exploring its power and its limitations.

Memory vs reality

I remember, back in the 90s and early 2000s, it was not uncommon to be asked to design market research surveys aimed at measuring actual behaviour (as opposed to attitudes and opinions). 

Such surveys might aim to establish things like how much people were spending on clothes in a week, or how many times they visited a particular type of retail outlet in a month, etc.  This kind of research was problematic.  The problem lay with people’s memories.  Some people can recall their past behaviour with exceptional accuracy.  However, others literally can’t remember what they did yesterday, let alone recall their shopping habits over the past week.

The resulting data only ever gave an approximate view of what was happening BUT it was certainly better than nothing.  And, for a long time, ‘nothing’ was usually the only alternative.

But now observational data, collected in our brave new digital world, goes some way to solving this old problem (at least in relation to the online world).  We can now know for sure the data we’re looking at reflects actual real-world consumer behaviour, uncorrupted by poor memory.

Silver Bullets

Alas, we humans are indeed a predictable lot.  New technology often comes to be regarded as a silver bullet.  Having access to a wealth of digital data is great – but we still should not automatically expect it to provide us with all the answers.

Observational data represents real behaviour, so that’s a good starting point.  However, even this can be misinterpreted.  It can also be flawed, incomplete or even misleading.

There are several pitfalls we ought to be mindful of when using observational data.  If we keep these in mind, we can avoid jumping to incorrect conclusions.  And, of course, if we avoid drawing incorrect conclusions, we avoid making poor decisions.

Correlation in data is not causation

It may be an old adage in statistics, but it is more relevant today than ever before.  For my money, Nate Silver hit the nail on the head:

“Ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there is no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Häagen-Dazs.”

[Nate Silver]

Finding a relationship in data is exciting.  It promises insight.  But, before jumping to conclusions, it is worth taking a step back and asking if the relationship we found could be explained by other factors.  Perhaps something we have not measured may turn out to be the key driver.

Seasonality is a good example.  Did our sales of Christmas decorations go up because of our seasonal ad-campaign or because of the time of year?  If our products are impacted by seasonality, then our sales will go up at peak season but so will those of our competitors.  So perhaps we need to look at how market share has changed, rather than basic sales numbers, to see the real impact of our ad campaign.
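The point about share versus raw sales can be illustrated with some made-up numbers (all figures here are hypothetical, purely for illustration):

```python
# Hypothetical seasonal sales: our raw sales rise at Christmas,
# but the whole market rises too, so market share is the better signal.
ours = {"Nov": 100, "Dec": 180}      # our units sold (made-up)
market = {"Nov": 1000, "Dec": 2000}  # total category sales (made-up)

share = {m: ours[m] / market[m] for m in ours}
print(share)  # Nov: 10%, Dec: 9% – sales rose, but share actually fell
```

Raw sales nearly doubled, yet share fell: on these numbers the campaign would look like a success until you control for seasonality.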

Unrepresentative Data

Early work with HRT seemed to suggest that women on HRT were less susceptible to heart disease than other women.  This was based on a large amount of observed data.  Some theorised that HRT treatments might help prevent heart disease. 

The data was real enough.  Women who were on HRT did experience less heart disease than other women.

But the conclusion was utterly wrong.

The problem was that, in the early years of HRT, women who accessed the treatment were not representative of all women. 

As it turned out they were significantly wealthier than average.  Wealthier women tend to have access to better healthcare, eat healthier diets and are less likely to be obese.  Factors such as these explained their reduced levels of heart disease, not the fact that they were on HRT.

Whilst the completeness of digital data sets is improving all the time, we still often find ourselves working with incomplete data.  Then it is always prudent to ask – is there anything we’re missing that might explain the patterns we are seeing?

Online vs Offline

Naturally, digital data is a measure of life in the online world.  For some brands this will give full visibility of their market, since all, or almost all, of their customers primarily engage with them online.

However, some brands have a complex mix of online and offline interactions with customers.  As such it is often the case that far more data exists in relation to online behaviour than to offline.  The danger is that offline behaviour is ignored or misunderstood because too much is being inferred from data collected online.

This carries a real risk of data myopia, leading to us becoming dangerously over-reliant on insights gleaned from an essentially unrepresentative data set. 

Inferring influence from association

Put simply – do our peers influence our behaviour?  Or do we select our peers because their behaviour matches ours?

Anna goes to the gym regularly and so do most of her friends.  Let’s assume both statements are based on valid observation of their behaviour.

Given such a pattern of behaviour it might be tempting to conclude that Anna is being influenced by ‘herd mentality’. 

But is she? 

Perhaps she chose her friends because they shared similar interests in the first place, such as going to the gym? 

Perhaps they are her friends because she met them at the gym?

To identify the actual influence, we need to understand the full context.  Just because we can observe a certain pattern of behaviour does not necessarily tell us why that pattern exists.  And if we don’t understand why a certain pattern of behaviour exists, we cannot accurately predict how it might change.

Learning from past experiences

Observational data measures past behaviour.  This includes very recent past behaviour of course (which is part of what makes it so useful).  Whilst this is a useful predictor of future behaviour, especially in the short term, it is not guaranteed.  Indeed, in some situations, it might be next to useless. 

But why?

The fact is that people (and therefore markets) learn from their past behaviour.  If past behaviour leads to an undesirable outcome they will likely behave differently when confronted with a similar situation in future.  They will only repeat past behaviour if the outcome was perceived to be beneficial.

It is therefore useful to consider the outcomes of past behaviour in this light.  If you can be reasonably sure that you are delivering high customer satisfaction, then it is less likely that behaviour will change in future.  However, if satisfaction is poor, then there is every reason to expect that past behaviour is unlikely to be repeated. 

If I know I’m being watched…

How data is collected can be an important consideration.  People are increasingly aware their data is being collected and used for marketing purposes.  The awareness of ‘being watched’ in this way can influence future behaviour.  Some people will respond differently and take more steps than others to hide their data.

Whose data is being hidden?  Who is modifying their behaviour to mitigate privacy concerns?  Who is using proxy servers?  These questions will become increasingly pressing as the use of data collected digitally continues to evolve.  Will a technically savvy group of consumers emerge who increasingly mask their online behaviour?  And how significant will this group become?  And how different will their behaviour be to that of the wider online community?

This could create issues with the representativeness of the data sets we are collecting.  It may even lead to groups of consumers avoiding engagement with brands they feel are too intrusive.  Could our thirst for data, in and of itself, put some customers off?  In certain circumstances – certainly yes.  This is already happening.  I certainly avoid interacting with websites with too many ads popping up all over the place.  If a large ad pops up at the top of the screen, obscuring nearly half the page, I click away from the site immediately.  Life is way too short to put up with that annoying nonsense.

Understanding why

By observing behaviour, we can see, often very precisely, what is happening.  However, we can only seek to deduce why it is happening from what we can see. 

We might know that person X saw digital advert Y on site Z and clicked through to our website and bought our product.  Those are facts. 

But why did that happen?

Perhaps the advert was directly responsible for the sale.  Or perhaps person B recommended your product to person X in the bar the night before; person X then saw your ad the next day and clicked on it.  In that case the ad only played a secondary role in selling the product – an offline recommendation was key.  Unfortunately, the key interaction occurred offline, so it remained unobserved.

Sometimes the only way to find out why someone behaved in a certain way is to ask them.

Predicting the future

Forecasting the future for existing products using observational data is a sound approach, especially when looking at the short-term future.

Where it can become more problematic is when looking at the longer term.  Market conditions may change, competitors can launch new offerings, fashions shift etc.  And, if we are looking to launch a new product or introduce a new service, we won’t have any data (in the initial instance) that we can use to make any solid predictions.

The question we are effectively asking is about how people will behave and has little to do with how they are behaving today.  If we are looking at a truly ground-breaking new concept then information on past behaviour, however complete and accurate, might well be of little use.

So, in some circumstances, the most accurate way to discover likely future behaviour is to ask people.  What we are trying to do is to understand attitudes, opinions, and preferences as they pertain to an (as yet) hypothetical future scenario.

False starts in data

One problematic area for digital marketing (or indeed all marketing) campaigns is false starts.  AI tools are improving in their sophistication all the time.  However, they all work in a similar way:

  • The AI is provided with details of the target audience.
  • It starts with an initial experiment.
  • It observes the results.
  • It then modifies the approach based on what it learns.
  • The learning process is iterative: the longer a campaign runs, the more the AI learns, and the more effective it becomes.
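The steps above can be sketched as a toy simulation.  Everything here is hypothetical – the segment names, the response rates, and the simple explore-and-refine rule – and real ad platforms use far more sophisticated methods, but the iterative idea is the same:

```python
import random

random.seed(1)  # make the toy run repeatable

# Hypothetical audience segments and their true response rates,
# which the optimiser does not know in advance
true_rates = {"segment_a": 0.02, "segment_b": 0.05, "segment_c": 0.01}
observed = {s: [0, 0] for s in true_rates}  # [clicks, impressions]

def serve(segment):
    """Show one ad to the segment and record whether it was clicked."""
    clicked = random.random() < true_rates[segment]
    observed[segment][0] += int(clicked)
    observed[segment][1] += 1

# Iterative loop: start with an experiment across all segments, then
# mostly exploit whichever segment performs best while still exploring
# the others (a simple epsilon-greedy sketch)
for step in range(5000):
    if step < 300 or random.random() < 0.1:
        segment = random.choice(list(true_rates))  # explore
    else:
        segment = max(observed,
                      key=lambda s: observed[s][0] / observed[s][1])  # exploit
    serve(segment)

most_served = max(observed, key=lambda s: observed[s][1])
print(most_served)  # usually the segment with the highest true rate
```

Note that if the initial brief points the loop at the wrong segments entirely, the optimiser can only learn its way out slowly – which is the “false start” problem described below.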

However, how does the AI know what target audience to aim for in the first instance?  In many cases the digital marketing agency determines that based on the client brief.  That brief is usually written by a human and should (ideally) provide a clear answer to the question “what is my target market?”

That tells the Agency and, ultimately, the AI, who it should aim for.

However, many people, unfortunately, confuse the question “what is my target market?” with “what would I like my target market to be in an ideal world?”  This is clearly a problem and can lead to a false start.

A false start is where, at the start of a marketing campaign, the agency is effectively told to target the wrong people.  Therefore, the AI starts by targeting the wrong people and has a lot of learning to do!

A solid understanding of the target market in the first instance can make all the difference between success and failure.

Balancing data inputs

The future will, no doubt, provide us with access to a greater volume and variety of better-quality digital data.  New tools, such as AI, will help make better sense of this data and put it to work more effectively.  The digital revolution is far from over.

But how, when, and why should we rely on such data to guide our decisions?  And what role should market research (based on asking people questions rather than observing behaviour) play?

Horses for courses

The truth is that observed data acquired digitally is clearly better than market research for certain things. 

Most obviously, it is better at measuring actual behaviour and using it for short-term targeting and forecasting. 

It is also, under the right circumstances, possible to acquire it in much greater (and hence statistically reliable) quantity.  Crucially (as a rule) it is possible to acquire a large amount of data relatively inexpensively, compared to a market research study.

However, observed historic data is better at telling us ‘what’, ‘when’ and ‘how’ than it is at telling us ‘why’ or ‘what next’.  We can only look to deduce the ‘whys’ and the ‘what nexts’ from the data.  In essence, it measures behaviour very well, but it is poor at determining opinion and potential shifts in future intention.

The role of market research

Question-based market research surveys are (or at least should be) based on structured, representative samples.  They can be used to fill in the gaps we can’t get from digital data – in particular, they measure opinion very well and are often better equipped to answer the ‘why’ and ‘what next’ questions than observed data (or attitudinal digital data).

Where market research surveys will struggle is in measuring detailed past behaviour accurately (due to the limitations of human memory), even if it can measure it approximately. 

The main reason for using market research to measure behaviour now is to provide an approximate measure that can be linked to opinion-related questions on the same survey – to tie the ‘why’ to the ‘what’.

Thus, market research can tell us how the opinions of people who regularly buy products in a particular category differ from those of less frequent buyers.  Digital data can usually tell us more accurately who has bought what and when – but that data is often not linked to attitudinal data that explains why.

Getting the best of both data worlds

Obviously, it does not need to be an either/or question.  The best insight comes from using digital data in combination with a market research survey.

With a good understanding of the strengths and weaknesses of both approaches it is possible to obtain invaluable insight to support business decisions.
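To make this concrete, here is a purely illustrative sketch of combining the two worlds: behavioural records (the ‘what’ and ‘when’) joined to survey responses (the ‘why’) on a shared customer ID.  The field names, IDs and values below are hypothetical, not drawn from any real dataset.

```python
# Hypothetical behavioural (digital) data, keyed by customer ID
purchases = {
    "C001": {"category": "games", "orders_last_year": 12},
    "C002": {"category": "games", "orders_last_year": 1},
}

# Hypothetical question-based survey responses for the same customers
survey = {
    "C001": {"why_buys": "collects new releases"},
    "C002": {"why_buys": "gift purchase only"},
}

# Join behaviour to opinion wherever the same customer appears in both sources
combined = {
    cid: {**purchases[cid], **survey[cid]}
    for cid in purchases.keys() & survey.keys()
}
```

In practice this kind of linkage is usually done in a database or with a data-frame library, but the principle is the same: a common identifier lets the ‘why’ from the survey sit alongside the ‘what’ and ‘when’ from the observed data.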

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources, references & further reading:

Observational Data Has Problems. Are Researchers Aware of Them? GreenBook Blog, Ray Poynter, October 2020

A couple gaming

The evolution of gaming – from niche to mainstream

There was a time, perhaps not so long ago, when gaming was viewed as a niche hobby, appealing only to young men.  Many people’s idea of a ‘gamer’ was of a teenage boy glued to a computer screen, leading a semi-reclusive and often nocturnal lifestyle.

But this has changed.  Gaming has evolved significantly since those times and now reaches a far more diverse demographic than ever before.

In 2021, the fact is that most people are gamers.  Using the results from our recent gamer survey, we explore just how widespread and diverse gaming has become.

Old stereotypes persist

Surprisingly, some people still regard gaming as a niche interest, apparently clinging to many of the old stereotypes.

Only this September an article appeared in the Telegraph under the headline “Grown men shouldn’t be wasting their lives playing video games.”  The story implied, firstly, that gaming is a frivolous waste of time for an adult and, secondly, that it’s mainly men, rather than women, who tend to ‘waste’ their time doing it.

Of course, it is strange indeed that gaming should be singled out in this manner. Other equally unproductive leisure pastimes like watching movies, attending a gig or being a spectator at a sports event are, for some reason, considered to be less of a waste of time.  But leaving that aside, the idea that gaming is still the exclusive preserve of geeky teenage boys couldn’t be further from the truth.

Most of us are gamers

The reality is that gaming is now a mainstream interest.  Our survey shows that 76% of adults aged 16+ played a game last year. 

Now you might argue that playing Call of Duty for an hour on your old Xbox360 once last year does not a ‘true’ gamer make.  There are of course a few people who only play occasionally like this.  However, perhaps a better way of looking at it is that around 60% of us play games on a regular basis (at least once a week).

So, the truth is that most adults are playing games regularly.

Gaming is no longer an all-male preserve

The idea that gamers are mostly men is also false.  The reality is that the majority (57%) of adult women play games every week (compared to 64% of men).  So, male gamers still make up the majority – but only just.

Men and women often engage with gaming differently, however.  They have different platform preferences, different genre preferences and even different preferences in where and how they like to buy their games.

Info graphic of UK gaming habits by gender

Men are more likely to play on conventional gaming platforms like PCs or games consoles.  40% of male gamers play regularly only on such devices, with just 18% being predominantly mobile gamers.  For women, the reverse is true.  Nearly half of female gamers play regularly on mobiles but hardly at all on PCs or consoles.  Only a minority of women (15%) tend to avoid mobiles in favour of playing regularly on a PC or console.

Men are more likely to opt for games like shooters, sports and fighting games – all classic genres with a long-established history.  For women, however, casual games are by far the most popular.  Women also like games with a mystery-solving theme (rather than fighting and/or shooting themes) and many like to play what we’ve termed “table games” – mobile, console or PC versions of conventional games you might expect to play physically at a table (like sudoku, scrabble, jigsaw puzzles or solitaire).

Platform and genre preferences also affect where people like to buy their games.  Women, with a stronger preference for mobile and casual gaming, are much more inclined to source their games from places like the App Store and Google Play.  For men, sources like Amazon and the PlayStation Store are far more important.

Gaming across the generations

But is it still true to say that gaming is mainly all about teenagers and people in their early twenties?

No.

68% of youngsters aged 16-24 play games every single week.  This is higher than the average for all adults, so gaming certainly appears to be most popular with this age group.

However, a very similar proportion of 25–34-year-olds play just as often.  And if we look at the 35–44 age group, we see that as many as 64% also play regularly.

Gaming remains almost as popular with the 45-54 age group; 62% of whom play every week.

For the 55-64 age group, we do see some decline in interest in gaming.  However, significant numbers of people of this age still play and still play regularly.  41% play games every week.  It seems that many of the old Space Invaders generation are still gaming strong.

Gaming is evolving as a key medium for the C21st

Gaming is fast becoming as much a part of our daily leisure activities as watching movies or listening to music. 

As a leisure medium, gaming benefits from the potential to offer a high degree of interaction.  The player does not passively experience a game, they actively participate in it.  If a game designer can get it right, they can create a truly absorbing, interactive experience that will attract a highly engaged audience.

This isn’t simply an opportunity for gaming brands but, increasingly, a fast-evolving opportunity for brands outside the industry.  The medium of gaming provides such brands with a golden opportunity to connect with a highly engaged audience.

eSports events already attract significant sponsorship from brands like Intel, Coca-Cola, Honda and Red Bull.  For a brand like Intel, the tie-in is an obvious one, with gamers being such important consumers of higher-end PCs.  But what about soft drinks and automotive brands?  Well, here the tie-in is also compelling; regular gamers account for as many as 70% of adults who say they enjoy fizzy soft drinks and 61% of car owners.

Gaming offers all these brands a means to reach out to highly engaged audiences; some of which may be hard to connect with via other more traditional media.

One thing is for sure, as gaming continues to evolve, it will reach out to wider and more diverse sections of the community. This will bring with it new challenges as well as new opportunities.

For further information about the UK gaming market & Synchronix

The statistics quoted in this article come from our UK Gaming Market Report of 2021. 

This report provides invaluable insight into current trends in the UK gaming market, covering detailed gamer demographics, genre preferences, device preferences, trends in Cloud, eSports audiences, VR, gamer consumer profiles, aspirations for the future and more. 

You can find out more about this report on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

If you have any specific questions about our services, please contact us.

Sources

Playbook – UK Gaming Market Report 2021, Synchronix Research

Mental Health Foundation

Telegraph, Camilla Tominey, September 2021

Bumping elbows

Freedom Day – a British Experiment

19 July 2021 is “freedom day” – the day when the UK government has relaxed the last Covid restrictions in England.  But does it mark a return to normality (whatever that is), or is it, as some have suggested, a dangerous British experiment?

For many of us, the relaxing of restrictions is a welcome relief.  The cost in economic and social terms has been high.  Many businesses in the hospitality sector have really struggled to survive the restrictions.  That’s not to mention the impact on our social lives.  Covid has left some people feeling incredibly isolated and others struggling on reduced incomes. 

Most of us are keen to see life return to normality.  After all, we cannot go on like this forever.  Sooner or later, we must find a way to live with Covid.

Dangerous experiment?

However, some experts have dubbed “freedom day” a dangerous British experiment.

In a letter published in the Lancet on 7 July 2021, signed by 100 experts and since endorsed by many scientists around the world, the idea of relaxing restrictions on the 19th was branded dangerous and premature.

These experts highlighted five key risks:

  1. A significant proportion of the population are still unvaccinated (especially younger adults and children).  This will lead to high levels of infection running the risk of leaving many people with long term health problems.
  2. It risks high levels of infection amongst children that will accelerate when they return to school.  This will lead to further significant disruption of children’s education.
  3. Such high levels of infection represent fertile ground for dangerous new strains of Covid to emerge.  This includes the risk of a vaccine resistant strain emerging.
  4. It will lead to more hospitalisations which will place significant pressure on the NHS.
  5. Deprived and vulnerable communities are the most at risk and likely to be hardest hit by rising infection rates.

The experts recommended delaying any further easing of restrictions until the vaccination programme has covered most of the population.  This would imply a delay until late August or possibly early September.

As it stands, on 19 July 2021, government statistics show that nearly 88% of the population have had their first jab and 68% have received both jabs.  These are high numbers and position our vaccination roll-out well ahead of other countries.  However, it is nevertheless the case that one in three of us is not yet fully covered.

Infections are rising

Infections have risen significantly since the beginning of June, as restrictions have been eased and we have had to deal with the impact of the more infectious Delta variant. 

Graph of UK trends in cases: July 2020 - July 2021

The number of cases is fast climbing towards 60,000 and could easily hit 100,000 by the end of the month.  There seems little doubt now that case numbers will exceed the peak we saw back in January 2021.

The link between cases and hospitalisation: weakened but not gone

It has been claimed that new cases are not leading to new hospitalisations. 

A few weeks ago, we wrote a blog in which we created a Covid Index to allow us to view trends in cases, hospitalisations and deaths in parallel.  So now seems like a good time to revisit this to see how well the data supports this claim.

Unfortunately, if we look at the data, we can see that this claim is not entirely true.

It is now clear that we are seeing a gradual but distinct uptick in hospital admissions.  More cases do mean more hospital admissions, even if the link is now a lot weaker than before.

UK Covid trends INDEX: July 2020 - July 2021

The good news is that the level of increase is not tracking new cases anywhere near as closely as was the case back in January.  At that time rising cases led to a similar rise in both hospitalisations and deaths.  These followed on fairly quickly behind case reporting. 

Now, the immediate impact is much reduced and instead we are seeing a more gradual but nevertheless notable increase in hospitalisation.

Clearly, the fact that so many people are now vaccinated (especially amongst the most vulnerable groups) means that a much higher proportion of infections are now mild or asymptomatic.

A modest increase in deaths

A closer look at trends over the past month also shows that, as yet, we are not seeing any major uplift in deaths.  However, the figures do show a slight overall increase.

Graph of Covid Trends INDEX: Summer 2021 Trends

Overall case numbers have grown to around four times the average for the past 12 months.

Hospitalisations are rising at a slower rate, but rising nonetheless.  Current levels of hospital admissions sit at around 75% of the average number recorded over the past 12 months.  At the current rate of increase, hospital admissions are likely to exceed that average before the end of the month.

Deaths, at present, show only a relatively modest increase since the start of June.  It is too early to fully judge the likely medium-term impact on death rates.  They are still low, at around 10%-15% of the average rate we have seen over the past 12 months.  However, this is still up from a rate of under 5% recorded during late May and early June.
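As a rough sketch of how such an index can be built (the article does not spell out its exact construction, so the 7-day rolling window and the 12-month mean baseline here are assumptions), each day’s rolling average is expressed as a percentage of the long-run daily average:

```python
def covid_index(daily_counts, window=7):
    """Index each day's rolling average against the period mean (mean = 100).

    daily_counts: a list of daily figures (cases, admissions or deaths)
    covering the whole comparison period (here, 12 months).
    """
    baseline = sum(daily_counts) / len(daily_counts)  # 12-month daily mean
    index = []
    for i in range(len(daily_counts)):
        # Rolling average over the last `window` days (shorter at the start)
        recent = daily_counts[max(0, i - window + 1): i + 1]
        rolling = sum(recent) / len(recent)
        index.append(100 * rolling / baseline)
    return index
```

On such a scale, a value of 100 means a metric is running at its 12-month average – so cases at roughly four times the average, admissions at around 75% of it, and deaths at 10%-15% of it can all be read off the same chart.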

Likely trend

As vaccination continues to roll out, it will have an increasingly suppressive effect on infections.  However, the relaxing of restrictions will serve as an accelerant – especially amongst young adults, who are the least protected and the most likely to congregate in large social gatherings at pubs and nightclubs.

It is always difficult to predict numbers given the changing nature of the pandemic and the ongoing rolling impact of vaccinations.  However, it seems that by the middle of August we are likely to see:

  • Infection rates: will probably exceed 100,000 cases.
  • Hospital admissions: likely to be c.1,300 per day.
  • Deaths: likely to be c.50-70 per day.

This would mean that hospitalisations would be around the levels we were seeing in mid to late February and deaths at around the levels we were seeing in mid-to-late March. 

With infection rates at about 100,000, many people would be forced to self-isolate under current test and trace rules, which could be very disruptive.  Although the government plans to modify the self-isolation rules for fully vaccinated people, this will not happen until mid-August.

A race to roll out

We are now in a race between a virus that has been given significant freedom to spread on the one hand and a vaccination programme that is fast progressing to a point where the population will be fully vaccinated on the other.  These two factors combine to push the numbers in different directions.

Of course, we have to re-open society and adapt to live with this virus at some point.  Let’s just hope we have not made that step a month or two too soon.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future. 

You can read more about us on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

If you have any specific questions about our services, please contact us.

Sources

Government Coronavirus Data

Lancet, 7 July 2021

The Guardian, 16 July 2021

UK Elections 2021 – How is the political landscape changing?

How is the political landscape changing? As the dust settles on the May 2021 elections, it is worth taking a closer look at the results to see what they might tell us.   

England

The overall results for Labour have been bad across the English elections.  Labour has lost seats across many areas and, at the same time, the Tories have picked seats up.

Overall, the Tories have increased their number of councillors in contested areas by 11%, while Labour’s have declined by 20%.

The LibDems remain the third largest party but have seen little real change.

Other important highlights are that UKIP has now disappeared from the political scene and Reform has failed to hoover up those old seats.  The main beneficiary from the demise of UKIP has clearly been the Tories. 

There has also been a dramatic increase in the number of Green councillors (more than doubling to 151 in contested areas).

One final important highlight is the fact that there have been gains across the board for a mix of independents (an 18% increase to 255 councillors).

Labour’s highest profile loss was, of course, Hartlepool.  However, here, the story has more to it than meets the eye.

Hartlepool

In Hartlepool the Tories saw their vote increase from 28.9% at the last general election to 51.9% on May 6th.   Much of this gain is likely the result of the disappearance of the Brexit Party as a meaningful political force.  25.8% voted BP in 2019 which, if added to the Tory vote at that time, would total 54.7% – similar to the Tory vote this time around.

Whilst this may explain the Tory win, it does not explain the reduction in the Labour vote (falling from 37.7% in the last general election to 28.7%).  Smaller parties like the Greens may have taken votes from Labour but as the Greens only accounted for 1.2% of the vote, this can hardly explain it.

One point to remember is that the incumbent MP was forced to leave office because of allegations of sexual harassment and victimisation.  This may have served to turn some voters away from Labour – but the question remains: whatever their reasons for not voting Labour, who did those voters turn to?

A big factor appears to have been an independent candidate – Sam Lee, a local businesswoman.   Sam positioned herself as someone who stood up for the local business community and a Westminster outsider.  A vote for her, she claimed, would “show politicians that we are sick of their party games and empty promises”. A vote for her then, was, in many ways, a rejection of the status quo.  Sam polled 9.7% of the vote and, as she didn’t stand in 2019, it looks like she may have taken a fair number of votes away from Labour.

No change..?

So, in 2021, it may be that Hartlepool saw no real significant switch from Labour to Tory at all – that had already happened in 2019, when large numbers of voters changed to the Brexit Party.  And having switched to the BP, the move to voting Tory seems to have been an easy step for many. 

The vote for Sam Lee is significant though.  It shows a considerable number of people prepared to vote for someone outside the political establishment, and a desire amongst many for something quite different from the established parties.

The Red Wall weakens in the North and Midlands

In general, results in the North and Midlands have shown the biggest Tory gains plus the most serious Labour losses.

Again, the explanation seems to lie mainly with picking up former Brexit Party voters rather than outright direct conversion of 2019 Labour voters. 

The biggest Tory gains compared with previous local elections were in Yorkshire and Humberside (+11.2% up), the West Midlands (+9.7%) and the North East (+7.3%).

These marry up with the more significant Labour losses – Yorkshire and Humberside (-4.5%), the West Midlands (-5%) and the North East (-4%).

Labour losses and Tory gains were less significant elsewhere in England.

So, are we witnessing a sea-change in voting patterns in the North driven by regional factors or is it something more complicated than this? 

It is true that the so-called Red Wall has clearly been seriously eroded in many parts of the North.  However, Labour has performed well in certain large cities in the region.  Could it be that this is more about how voting patterns are changing in metropolitan v non-metropolitan areas than it is about changing attitudes in the North?

The Metropolitan Effect

Labour has performed well in northern metropolitan areas such as Liverpool and Manchester, showing that it can hold its own there under the right conditions.

In Manchester, Labour even gained ground.  Perhaps this was due in no small part to the charismatic Andy Burnham but the numbers tell a convincing tale.

Labour increased its share of the vote on the first choice for Mayor from 63.4% in 2017 to 67.3% in 2021. The Tories slipped from 22.7% to 19.6%.  Here, the lesser parties were very much out of the picture.

The Labour Mayoral vote also held strong in Liverpool.  There is no sign of any cracks in the Red Wall in these major northern cities; a quite different story from the one we see in less urban areas.

So why is the metropolitan vote in the North so different from the trends we see elsewhere?

The Role of ‘Englishness’

Will Jennings, Professor of Political Science and Public Policy at Southampton University, feels that the migration of voters to the Brexit Party and then to the Tories has much to do with the emergence of a strong English national identity.  This outlook tends to view the Tories as a party that is positive about the English, and Labour as lukewarm about, or even hostile to, an English cultural identity.

Evidence for this can be found in BSA research that looked at the motives behind voting Leave/Remain in the Brexit vote.  This found that people who identified themselves as ‘British’ and not ‘English’ in England, voted 62% in favour of Remain.  However, 72% of people who identified themselves as ‘English’ and not ‘British’, voted in favour of Leave.

This sentiment, Jennings would argue, has translated into a vote for the Brexit Party in 2019 and has now converted into a Tory vote.  Parts of the North which have switched to Tory are often areas where this sense of ‘Englishness’ is strongest.

However, cities such as Manchester and Liverpool are more cosmopolitan in character and have strong and distinct local identities (as Mancunians or Scouse).  As a result, the tendency to strongly identify with an ‘English’ nationalist identity is less evident.  This in turn translates into a much-reduced willingness to switch away from Labour to the BP or Tories.

Treating the ‘North’ as a single homogenous area would therefore appear to be a gross over-simplification.

A different picture in Southern England

In the South, there was less dramatic change in voting patterns.  Although we saw some shift to the Tories in the council elections, the change was nowhere near as significant or dramatic as that seen in the North.

However, there are a couple of interesting results that are worth pulling out – both Labour Mayoral wins.

The first is the result for Cambridgeshire and Peterborough.  On first choices alone, the Tories would have won (Tory 41%, Labour 33%, LibDems 27%).  However, once the LibDems were knocked out, the second-choice votes of their supporters went overwhelmingly to Labour.  The result enabled Labour to win (just) with 51%.
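The counting mechanics at work here are those of the supplementary vote system used for these mayoral elections.  A minimal sketch follows, using illustrative ballot numbers and candidate labels rather than the actual returns:

```python
from collections import Counter


def supplementary_vote(ballots):
    """Count a supplementary-vote election.

    Each ballot is a (first_choice, second_choice) pair; second_choice may
    be None.  If no candidate wins an outright majority of first choices,
    the top two remain and eliminated candidates' ballots transfer to
    whichever of the top two (if either) was marked as second choice.
    """
    first = Counter(b[0] for b in ballots)
    total = sum(first.values())
    leader, leader_votes = first.most_common(1)[0]
    if leader_votes * 2 > total:          # outright first-round majority
        return leader
    top_two = {c for c, _ in first.most_common(2)}
    final = Counter({c: first[c] for c in top_two})
    for first_choice, second_choice in ballots:
        if first_choice not in top_two and second_choice in top_two:
            final[second_choice] += 1     # transferred second preference
    return final.most_common(1)[0][0]
```

With first choices split roughly 41/33/26, the leader on first choices can still lose once the eliminated candidate’s second preferences break heavily for the runner-up – exactly the pattern seen in this result.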

The second result is for the West of England Mayor (which covers Bristol, Bath and North East Somerset and South Gloucestershire).

Here Labour increased its vote from 22.2% to 33.4% in the first round.  The Tories also actually did a little better (increasing from 27.3% to 28.6%).  The LibDems, again, saw limited but negative change (20.2% down to 16.3%) and the Greens again, saw progress (up to 21.7% from 11.2%). 

Again, it is worth noting that the presence of a strong independent candidate can affect the results.  In 2017 such a candidate polled 15% of the vote, but this time around no such candidate stood.

This does raise the possibility that a future cooperative arrangement between Labour, the Greens and the LibDems could cause significant damage to the Tories in some parts of the southern political landscape, however distant and unlikely such a prospect might seem today.

What about Scotland?

The results in Scotland, of course, have been quite different from anything we see in England.

Here we have seen the SNP make modest progress – increasing their share of the vote from 46.5% of constituency votes at the last parliamentary election in 2016 to 47.7% now.  The Tories saw little change in fortune (21.9% share now vs 22% in 2016).  Labour, too, saw limited change (21.6% down from 22.6%).

The SNP have consolidated and built on their dominant position even if they have not achieved an outright majority.  Some have suggested that they owe their electoral success at least in part to the general perception that Nicola Sturgeon has handled the Covid crisis well. 

One might make a similar observation across the UK: the light beckoning at the end of the Covid tunnel tends to favour incumbent administrations – the SNP in Scotland and the Tories in England.  There is no doubt some truth in this and, if so, we can see the pattern repeated in Wales.

What about Wales?

Wales bucked the pro-Tory trend we see in England.  Here comparisons with England are more interesting because Wales, like England, voted Leave (whereas Scotland did not).  However, UKIP and latterly the Brexit Party have never been quite the force in Wales that they were in many parts of England (the Brexit Party registered only 5% of the Welsh vote in the 2019 election). 

Here the Tories have not managed to benefit anywhere near so much from picking up former UKIP or Brexit Party voters.  In 2016 the Tories got 21.1% of the constituency vote, which they have been able to increase to 25.1% this time around.  This no doubt reflects picking up some of the old UKIP votes (which accounted for 12.5% of the votes in the 2016 assembly election).

However, in Wales Labour have increased their share of the vote from 34.7% to 39.9%. Plaid Cymru have remained at pretty much the same level (20.7% vs 20.5% last time).

As with elsewhere, it may well be that the incumbent administration is benefiting from the feeling that we are headed in the right direction Covid-wise. 

The lack of the BP/UKIP factor in the Welsh political landscape meant there were only a limited number of these voters for the Tories to pick up.  This supports Professor Jennings’ view that it is a sense of Englishness that has driven the migration of votes from Labour, via UKIP and the Brexit Party, to the Tories.  The absence of the ‘Englishness’ factor in Wales potentially explains why such a pattern has not been repeated here.

In conclusion

It is probably worth concluding by saying that we ought to be very careful in what we read into these results.  The 2021 elections have occurred at a time when so much is in a state of flux.  The Covid crisis makes these times most unusual indeed. 

In a few years’ time when (hopefully) Covid no longer dominates our lives, we will be living in a vastly different world.   Also, we cannot yet say what the longer-term impacts of Brexit may be.  We are also only at the very beginning of the Tory levelling-up agenda.  Much has been promised, but what will be delivered?

This election has highlighted some important emerging trends, but the events of the next few years could yet see things change quite radically.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

We offer market research services, opinion polling and content creation services.  You can read more about this on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

Sources

Election Results from BBC England

BBC Scottish Election Results

Welsh Election Results from the BBC

Sky News Election Takeaways

BSA

Hold the Front Page

Marketing Personas – powerful tool or pointless exercise?

What are marketing personas?

You have probably heard of marketing personas (or buyer personas as they are otherwise known).  The purpose of creating marketing personas is to paint a picture of the audience you are trying to reach.  Used well, it can be a great tool for segmentation marketing.  But used poorly it can, unfortunately, end up being a pointless exercise.

Where do marketing personas come from?

Whilst many of us may have seen the end result, it is often not entirely clear how these personas were created, or even by whom.

Perhaps a group of sales and marketing people huddled together in a workshop and “brainstormed” a bunch of personas.

Or perhaps they were created based on some focus groups that some of your marketing team had commissioned. 

Or maybe they were developed from insights generated from a larger scale quantitative market research survey.  Or perhaps all of these. 

How they were created does matter.  They should, of course, be based on a broad group of real customers – and not just plucked out of the air based on a meeting one person had with a single customer!

How do they help?

They allow us to bring to life different segments of our market and, in doing so, allow us to better target them and serve their needs. Or do they?

“Segments must be Measurable, Substantial, Accessible, Differentiable, and Actionable.”

Philip Kotler

Unfortunately, sometimes, people can go through a lengthy exercise in creating fancy personas only to find that they aren’t of much actual use.  They can look good.  They look as though they make sense.  You can even bring them to life with infographics, videos and swish artwork.  That’s all cool … but what use are they?

When such an exercise goes wrong you can end up with something that looks very impressive but is hard to relate to any of the questions or challenges that your business actually faces. 

But it doesn’t have to be like that.  Done right, marketing personas can be an extremely powerful business tool. 

So how do you get it right?

Make sure you start with some clear business objectives

First things first.  You must always start with a good reason why you want to create marketing personas in the first place.

That of course means you need to start with some tangible business questions.

Obvious questions usually include the following:

  • Who is most likely to buy our products?
  • What makes them buy?
  • How do we reach them?
  • What do we need to do to persuade them to buy?

Once you have these questions, you know what you are trying to achieve.  Your success criteria for the entire exercise are then clear and simple: can the personas we have created answer our original questions?  Keep these questions clearly in mind throughout the exercise – they are your guiding light and anchor point for the entire project.

Do you need them?

An important question to ask before you get too far with generating your personas is: “Do I even need to generate multiple marketing personas?”

Generating multiple market personas implies you are adopting a market segmentation strategy.  That means you want to divide your customers and prospects into different groups and adopt a different marketing approach for each of these groups.

This more targeted approach can bring great rewards. 

But to develop and execute specific campaigns and strategies to address different market segments requires resources.  Not everyone will have the resources or the time to invest in this.

This comes back to our original questions – why are you doing this?

Sometimes, people develop market personas for the wrong reasons.  Sometimes what you really need is something simpler. 

Maybe all you need is a good profile of your target customers and prospects as a single group – one group of people on whom you can focus your marketing resources with a single, specific approach.

In this case you just need a market profile that allows you to describe, accurately and meaningfully, the people who represent good prospects to your marketing agency.

Make sure Personas integrate into your marketing strategy

Although it sounds very obvious, people can sometimes get this wrong and, when they do, generating marketing personas can be a waste of time.

If you decide you need marketing personas then this should form an integral part of your marketing strategy.  The insight you gain from the personas will help you to design a targeted segmentation strategy that will shape and inform your marketing.

You don’t need to generate marketing personas if you have already determined what your strategy will be.  The whole point of creating them is to help formulate your strategy.

Personas are a powerful tool for briefing your marketing agency

When you brief a marketing agency, the first thing they will want is for you to paint a picture of your target audience.  Who are you trying to reach?  What do you need to say to them?

The more they know about the audience the better: the more specifically they can target any media campaigns, and the more engaging they can make the messaging.

With well-crafted and meaningful marketing personas you should be able to provide them with everything they need to create a very targeted and relevant campaign for you.

How do you know your Marketing Personas are any good?

OK, so you have completed the process of pulling together what you need to create your personas.  You believe they will answer the questions you set out at the start of the process.  Now you need to bring them to life and present them to colleagues, to your marketing agency and your partners.

So now you need to create a concise and meaningful guide that explains what these personas are and why they matter.

By this time you may have been working on the project for a few weeks.  So there is a real risk that you, your market research agency and anyone else closely involved might not be able to see the wood for the trees.  So it is worth taking a step back and looking at what you have, to make sure it does indeed give you everything you need.

You can check this by asking a few basic questions:

  • Is the Marketing Persona clearly defined and easy to understand?  How easy is it to explain to a colleague who has had no involvement in the project?
  • Does it tell you how big/small the market segment it represents actually is?  Is this particular Marketing Persona representative of 50% of your potential market or 5%?
  • Does it clearly outline the opportunity that this market segment represents?  Will these people buy from you?  Will it be an easy or a hard sell?  If your salesman is speaking with one of them, what are the chances that you will make a sale?
  • Does it tell you what this Persona likes and dislikes?  What kind of things are likely to interest and engage them?  And what might leave them cold?
  • How is this particular Persona different from the other ones?  Can you easily explain why this Persona is different?  What do you need to do differently to engage with this group that you do not need to do with any of the other Personas?
  • What media channels should you use to communicate with this Persona and how is this different from the others?
  • What kind of marketing messages do you need to design in order to ensure that people in this segment will listen and engage with you?

If you are able to reach a point where your marketing personas can be used to provide meaningful and actionable answers to each of these questions, then you know you have created something of real value.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

If you are looking to create market personas, we can provide market segmentation services that enable you to generate these in a structured and successful way.  You can read more about how we do this on our website.

Covid in Numbers – why have some countries suffered more than others?

As vaccinations roll out, we are beginning to see some light at the end of the covid pandemic tunnel.  It will take a few months yet, but it seems almost unreal to think that by the end of 2021 we may finally be back to some kind of post-pandemic normality.

Now seems like an appropriate time to take stock.  What might we learn from the traumatic events of the past year?  We might ask ourselves why some countries appear to have fared so much worse with Covid than others.  How have some countries experienced relatively low death rates, whereas others have experienced such tragically high numbers?

The Worst Hit

If we take a look at the numbers, the worst hit of the larger countries include many European nations (eight of the ten worst affected), as well as the USA and Mexico.  All ten have experienced more than 150 deaths per 100,000 population.  The worst affected at the time of writing is the Czech Republic, with over 230 deaths per 100,000.
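A per-capita death rate like those quoted above is simply deaths divided by population, scaled to 100,000.  A minimal sketch of the calculation, using illustrative round figures of roughly the right scale (not the exact Johns Hopkins numbers):

```python
def deaths_per_100k(deaths: int, population: int) -> float:
    """Covid deaths per 100,000 population."""
    return deaths / population * 100_000

# Illustrative figures, approximate early-2021 scale (not exact source data):
czech = deaths_per_100k(25_000, 10_700_000)      # Czech Republic
india = deaths_per_100k(160_000, 1_380_000_000)  # India (official figures)

print(round(czech))  # ~234 per 100,000 – consistent with "over 230"
print(round(india))  # ~12 per 100,000 – consistent with "under 12"
```

Normalising to a common denominator like this is what makes a country of 10 million directly comparable with one of 1.4 billion.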

Other countries have escaped relatively lightly.  Amongst the other European nations, Germany has suffered significantly less, experiencing a death rate less than half that of countries like the UK, Belgium and Hungary.

Healthcare Quality

One thing we might look at is the quality of healthcare.  More developed countries generally have more established, advanced and comprehensive healthcare.  That being the case, such nations should be better placed to deal with a pandemic such as covid.  Unfortunately, it is plain to see that there must be a lot more to it than this, with countries like the USA, UK and Italy all suffering badly despite their relatively advanced healthcare systems.

India has a comparatively small proportion of deaths (under 12 per 100,000 on official figures), despite its healthcare system being ranked only the 112th most efficient in the world by the WHO.  The USA is ranked 37th, the UK 18th and Italy 2nd.  Clearly there must be other factors at play.

One factor is potentially under-reporting.  One source estimated that the true level of covid deaths in India could be as much as five times higher than the official numbers.  However, even taking that into account, India’s death rates have still been significantly lower than those of the ten hardest hit nations.
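The under-reporting point is worth checking with simple arithmetic: even scaling India's official rate by the five-fold factor cited above leaves it below the 150-per-100,000 mark shared by the ten worst-hit countries.

```python
# India: official death rate vs. the five-fold under-reporting estimate
official_rate = 12       # deaths per 100,000 (approximate official figure)
underreport_factor = 5   # upper-bound estimate cited from the ITV source
adjusted_rate = official_rate * underreport_factor

print(adjusted_rate)        # 60
print(adjusted_rate < 150)  # True: still below the worst-hit threshold
```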

Whilst the standard of healthcare has no doubt played some role here, there are clearly other aspects involved.

Population Demographics

One factor is population demographics.  Older patients are much more likely to become seriously ill and die from covid than younger ones.  Here India’s age demographics count in its favour. 

Only 6% of India’s population is aged over 65.

Compare this to most European countries and the difference is striking – around 20% of the population in the hardest hit European countries is aged over 65.  Italy was the most vulnerable in this sense, with 23% aged over 65 before the pandemic hit.

Of the 10 hardest hit countries, 8 were nations where 19% or more of the population was aged over 65.  The USA has a slightly younger demographic (16% aged over 65), which would help to limit its vulnerability a little, but it is still clearly more exposed than somewhere like India.

Mexico is the odd one out here.  Only 7% of Mexicans are aged over 65, giving the country a youthful demographic closer to that of countries like India.  We must therefore look for other explanations as to why Mexico has suffered so badly.

Urbanisation

Covid spreads best in environments where people live in close proximity to each other and, in general, people living in towns and cities are more likely to live in closer proximity with others.  Indeed, although India in general has seen lower death rates, it has nevertheless suffered more in major urban centres like Mumbai.

Many of the countries that have been worst hit have high levels of urbanisation, which has likely contributed to higher death rates.  Belgium has a particularly high level of urbanisation (98% of its population lives in urban environments), making it especially vulnerable in this sense.  Several other countries on the list also have high urbanisation levels (80%+), namely the UK, USA, Mexico and Spain.  A country like India has a much lower level of urbanisation overall (36%), which means its population is more widely dispersed and people in rural environments are less likely to come into frequent contact with others who might be infected.

Lockdowns

The lockdown measures taken by different countries at different times would also have an impact.  However, as these measures are often taken in response to the pandemic getting out of control in the first place, it is no surprise to find that many of the countries with the worst rates have had to impose longer and stricter lockdowns.

According to the Oxford Covid-19 Government Response Tracker, the countries on our list that have taken the strictest measures for the longest periods over the course of the pandemic include the UK and Italy.  However, this has not prevented either country from registering high death rates, although it has no doubt helped to stop the problem getting even worse.

Based on these measures, the countries on this list that have been laxer would include Bulgaria (most notably), the Czech Republic, Hungary and Belgium.  So it is possible that in these cases a more relaxed approach has contributed to a higher death rate.

Test and Trace

Another factor is the efficiency of a country’s testing and tracing regime.  On this measure Mexico does especially badly, having so far managed to test only 41 out of every 100,000 people in its population – far fewer than any other country listed.

By contrast, the UK has now tested 1,585 people out of every 100,000 – more than any other country on the table.  Despite this, the UK still has the third worst death rate overall.  But here the devil lies in the detail.  The UK has massively improved its testing regime over the course of the pandemic but, initially, it lagged behind somewhat.  During the first 60 days after the first five UK deaths, the British managed to test just 23 people in every 100,000.  This compares poorly to a number of other affected countries. 

Germany’s lower overall death rate is partly down to its test and trace efficiency, especially during the early phase of the pandemic.  The Germans managed to test 37 people in every 100,000 during the first 60 days after their fifth death.
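The early-testing gap is easiest to see as a ratio.  A small sketch using the per-100,000 figures quoted above (tests in the 60 days after each country's fifth recorded death):

```python
# Tests per 100,000 population in the first 60 days after the fifth death
early_tests_per_100k = {"Germany": 37, "UK": 23}

ratio = early_tests_per_100k["Germany"] / early_tests_per_100k["UK"]
print(f"Germany tested {ratio:.1f}x as many people per capita early on")
# Germany tested 1.6x as many people per capita early on
```

That roughly 60% head start in early testing is one plausible contributor to Germany's much lower death rate, alongside the other factors discussed here.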

Of all the countries on the list, Mexico stands out as the furthest behind on test and trace at every stage of the pandemic.  No doubt this is a major reason why the country now ranks so highly in terms of death rates.

International Travel

Another factor is the level of international travel.  Countries that experience a large volume of people travelling through their airports and transport hubs are more likely to import covid from overseas. 

Of course, travel restrictions now apply across many nations, but this was not always the case.  The UK and the USA would, under normal circumstances, see significantly more international traffic than most other countries.  They, along with Hungary, would therefore have been most exposed to importing infection in the absence of strict border controls and quarantine measures.

The Czech situation

It is worth taking time to consider the Czech situation, since this country has experienced the most serious problems to date. 

In terms of many of the risk factors, nothing immediately stands out that would explain why it tops the list.  Its level of urbanisation is high but not unduly so, at 73%.  Likewise, its population demographic is not notably different from many other European countries (20% aged 65 plus).  It also receives limited international traffic compared to many other countries.

However, over the course of the pandemic its lockdown measures have been the second laxest of the ten worst affected countries.  Its figures for test and trace also do not appear as comprehensive as many others’ (although it appears to be testing a reasonable number of people now).

According to Dr. Rastislav Maďar, the dean of the University of Ostrava’s medical school, the Czech situation can be attributed to three key mistakes.  The first of these was a failure to make mask wearing mandatory, the second a decision to open shops in the run up to Christmas and the third a failure to react quickly enough to the presence of new strains in the new year.

Key lessons learnt

Hopefully, it is clear that no single factor or measure can, in and of itself, entirely explain why any particular country experiences a high death rate.  Many factors work together in combination. 

However, the nature of the pandemic is such that just a few missteps at any stage can quickly lead to a rapidly deteriorating situation.  Hopefully, we can all learn from that and avoid further mistakes in the final stages of the pandemic.

About Synchronix

We provide a wide range of market research and data services.  You can learn more about our services on our website.  Also, please check out our collection of free research guides for more information on specific services offers.

Sources

Johns Hopkins University: https://coronavirus.jhu.edu/map.html

United Nations: https://population.un.org/wup/Download/

Our World in Data: https://ourworldindata.org/grapher/covid-stringency-index

World Bank: https://data.worldbank.org/indicator/SP.POP.65UP.TO.ZS?name_desc=false&view=chart

ITV: https://www.itv.com/news/2020-12-09/is-indias-covid-19-death-rate-five-times-higher-than-official-figures-suggest

CNN: https://edition.cnn.com/2021/02/28/europe/czech-republic-coronavirus-disaster-intl/index.html
