

Do lower taxes stimulate growth?

Can the simple act of lowering taxes stimulate growth?  We now know (thanks to Liz Truss) that, when unfunded, tax cuts can certainly trigger economic chaos.  But even if they are properly funded, the question remains: will they really foster growth?

There are many who would argue that it would.  Some even go so far as to present a policy of low taxation as a silver bullet – a golden ticket to growth and prosperity! 

Bold claims indeed – but are they true?  What is the evidence? 

The search for evidence

How would we test the veracity of this concept?  It is not uncommon to see people quote anecdotal examples to support the contention.  But highlighting a particular instance where reduced tax has been followed by positive growth in a single country is potentially no more than a propaganda exercise.  What about the big picture?

There are many countries in the world.  Collectively they represent a wide variety of different economies and different tax regimes.  In a great many cases we have access to a lot of historic data on growth, tax policy and so on.  Surely it cannot be beyond the wit of man to compare tax policy to outcomes across many countries over time.  Can it?

This is not as simple as it sounds but it is possible.  The main problem is to make sure we compare apples with apples.

What is growth?

Firstly, we need to agree on a sensible definition of what we mean by ‘growth’.  At a simple level we might look at growth in terms of GDP.  However, GDP just tells us the total monetary value of an economy.  Growth in GDP is, of course, good.  But it tells us very little about how wealthy the people who live in that country are, because it takes no account of population size.

Think of it this way:

  • 100 people live on an island, each earning $5k per year.
  • Collectively they earn $500k in a year.
  • Only one person lives on the neighbouring island.  He earns $100k a year.
  • Which island is wealthier?

One island generates five times the money of the other.  However, clearly the guy living alone on the second island is wealthier.  For this reason, it is often better to look at per capita GDP (GDP per head of population) as a more accurate measure of wealth. 
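
In code, the comparison amounts to nothing more than a division – a trivial sketch using the island figures above:

```python
# Figures from the island example above
island_a = {"population": 100, "gdp": 500_000}   # 100 people earning $5k each
island_b = {"population": 1,   "gdp": 100_000}   # one person earning $100k

for name, island in [("Island A", island_a), ("Island B", island_b)]:
    per_capita = island["gdp"] / island["population"]
    print(f"{name}: GDP ${island['gdp']:,} – GDP per capita ${per_capita:,.0f}")
```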

Measuring taxation

It might seem like a simple thing to compare taxation between one economy and another until you come to try and do it.  It isn’t. 

If you think about it, any given country has a wide variety of different taxes and tax rates.  One might have high sales taxes and low income taxes.  Some might have forms of taxes that few other countries have.  In some countries individuals might pay limited personal tax but companies pay a lot (or vice versa). 

Hence, what we need to do is look at the overall tax burden when all these different elements are bundled up – i.e., the proportion of the wealth generated that is taken in tax.

Oddities

In order to compare like with like we probably ought not to include particularly odd situations.  Ireland is a case in point – so much so that Nobel Prize-winning economist Paul Krugman labelled the phenomenon ‘Leprechaun economics’.  So, what happened?

In 2015, Apple changed the way it reported its accounts.  It shuffled a large chunk of revenue, previously reported elsewhere, into Ireland.  Suddenly, Ireland’s reported GDP for the year jumped by 26%!  This had nothing to do with the underlying Irish economy and everything to do with an Apple accounting policy change.

This distortion makes it very misleading to look at the Irish economy in the same way as other economies.

Of course, the biggest ‘oddity’ of all in recent years has been covid.  The pandemic has had a massive effect on global economies since it first struck in early 2020.  Attempting to measure the impact of tax policy on growth in the period after 2020 would therefore be very difficult to say the least.

Snapshot blindness

We often look at growth quarter by quarter or year on year.  That is fine but it is nevertheless potentially just a snapshot.  It can be distorted by unusual events that are unique to a particular country or year.  This can present a picture in a particular period that is very different from the underlying trend.

To measure the impact of taxation on growth we really need to be thinking in terms of longer-term impacts – performance over a longer period of time than just a year.  The sun coming out on one day does not a summer make.

Base effect

One of the biggest problems in measuring growth between different economies is known as ‘base effect’. 

So, what is base effect? 

Imagine the following scenario:

  • Brian earns $10k per year in his very first job.
  • After his first year, he gets a pay rise and a promotion.  He now earns $15k per year.
  • Brian’s salary has grown by 50%!
  • His friend’s mother Anne earns $200k per year.
  • Anne also gets a pay rise at the end of the year.  Her salary increases to $240k.
  • Her pay has only gone up by 20% but Brian’s has gone up by 50%!
  • So … is Brian doing better than Anne?

Of course not. 

This is a classic example of base effect.  It is the main reason why developing / emerging economies grow very fast compared to developed economies.  They are starting from a very small base!

If we are going to meaningfully compare economic performance, we need to be mindful of distortions like base effect.  Economies differ widely in size and maturity, so for a meaningful comparison we ideally need to compare economies at a similar stage of economic development.

Data limitations

The reality is that more economic information is available for some countries than others and some report slightly differently.  The OECD strives to record comparable metrics for its member states and for several other key countries as far as is possible.

However, the fact remains that reasonably comparable information on tax and growth is not always available. 

This naturally places limitations on the extent to which we can meaningfully compare different countries.

Designing a test to measure the impact of tax

So, how might we go about attempting to measure the impact of tax policy on growth?  That, of course, sounds like a good title for an economics PhD thesis, which is not what we can realistically attempt in a short blog. 

Nevertheless, it is possible to look at some high-level measures for a basket of countries to see if we can observe any patterns over time.  So, with this in mind, let’s define our parameters:

  1. Time period: 2010-2019.  This gives us ten years of data that should capture trends over a reasonably long period.  It also has the merit of being the most recent period we can pick before covid starts to distort growth figures.
  2. We’ll measure the overall tax burden in comparison with growth in per capita GDP.
  3. To avoid the worst impacts of base effect, we’ll focus on a basket of developed economies (defined as having a per capita GDP in excess of $30k in 2010, at 2015 prices).
  4. This would potentially include Ireland but, due to the distortions unique to that country already mentioned, we will exclude it (a sketch of this selection step follows the list).
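
To make the selection step concrete, here is a minimal sketch in Python of how such a basket might be assembled.  The file name and column names are hypothetical – they simply stand in for whichever OECD extract is being used:

```python
import pandas as pd

# Hypothetical OECD extract: one row per country with the metrics used in this analysis
df = pd.read_csv("oecd_extract.csv")  # assumed columns: country, gdp_per_capita_2010, tax_burden, growth_2010_2019

# Developed economies only: per capita GDP above $30k in 2010 (at 2015 prices)
basket = df[df["gdp_per_capita_2010"] > 30_000]

# Exclude Ireland because of the 'Leprechaun economics' distortion described above
basket = basket[basket["country"] != "Ireland"]

print(basket[["country", "tax_burden", "growth_2010_2019"]])
```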

What tools can we use to find a pattern?

Let’s say we have data for 25 countries over 10 years.  It tells us, in each case, the tax burden and the growth experienced.  How do we know if there is a pattern, i.e., that they are inter-related in any way?

We have a couple of tests we can apply in the initial instance:

  1. Correlation
  2. Regression analysis

These tests measure slightly different things. You have probably heard of them and may have some familiarity with them.  However, perhaps you are one of the many people who is not entirely clear on the specific differences between the two.

What is correlation and how does it work?

Correlation is a statistical technique that measures the strength of the relationship between two sets of data.  It generates a number between -1 and +1 to indicate the strength of a relationship.  Technically we call this the correlation coefficient (or simply ‘r’).

A positive number indicates that a pattern exists: as one number increases, the other also tends to increase.

A negative number indicates the reverse – that one number declines as the other increases.

A number close to 1 indicates a very strong relationship.  Close to 0 indicates a virtually non-existent relationship.

So, for example, we might compare ice cream sales to average temperature.  If we see a correlation of +0.8, this tells us that there is a strong relationship and that as temperature rises, ice cream sales also rise.
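
For anyone who wants to see the mechanics, here is a minimal sketch in Python.  The temperature and sales figures are invented purely for illustration:

```python
import numpy as np

# Invented illustrative data: average daily temperature (°C) and ice cream sales
temperature = np.array([12, 15, 18, 21, 24, 27, 30])
ice_cream_sales = np.array([110, 150, 190, 260, 300, 360, 410])

# Pearson correlation coefficient ('r') between the two series
r = np.corrcoef(temperature, ice_cream_sales)[0, 1]
print(f"Correlation coefficient r = {r:.2f}")  # close to +1: a strong positive relationship
```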

Correlation and causation

Correlation is not causation of course.  Correlation might tell us that ice cream sales rise when polar melt increases.  But that does not mean that buying ice cream causes the ice caps to melt!  In this example it just means that both are impacted by a third variable that we have failed to take into account – i.e. temperature.

In the case of what we’re trying to do here, a strong negative correlation between tax burden and growth would indicate a strong relationship between a high tax burden and poor growth.

As a general guide to the strength of a relationship we’d typically consider:

Correlation coefficient    Strength of the relationship
1 or -1                    Perfect!
0.7 or -0.7                Strong
0.5 or -0.5                Moderate
0.3 or -0.3                Weak
0                          None

What is regression analysis and how does it work?

Correlation seeks to measure the strength of a relationship but no more.  Regression analysis goes one step further.  It seeks to build a model to predict how one factor might change as a result of a change in another.

So, in this case, a regression analysis would seek to predict how an increase or decrease in the tax burden might impact growth.

Typically, regression produces two things.  The first is a formula that you can use to predict an outcome.  Hence, you can use a regression formula to tell you what growth you might expect if you set the tax burden at a particular level.

The second key output from a regression analysis is a measure of how reliably this formula can predict an outcome.  The technical term for this measure is ‘R²’ (not to be confused with the ‘r’ we use in correlation).  Confusingly, it is also a number on a fixed scale – this time between 0 and 1 – but it means something quite different.

If the number is close to 1, the formula is a strong fit to the data.  However, if the number is close to zero, the formula is pretty much junk.  So R² simply tells us how well we can predict growth rates based on the tax burden.
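
As a rough illustration of what regression produces, here is a short sketch in Python.  The tax and growth figures below are invented for the example – they are not the study data:

```python
from scipy import stats

# Invented illustrative data: overall tax burden (% of GDP) and ten-year growth in GDP per capita (%)
tax_burden = [25, 28, 31, 34, 37, 40, 43, 46]
growth     = [16, 14, 15, 11, 12,  9, 10,  7]

result = stats.linregress(tax_burden, growth)

# The regression formula: predicted growth = slope * tax burden + intercept
print(f"growth ≈ {result.slope:.2f} * tax_burden + {result.intercept:.2f}")

# R-squared tells us how well that formula fits the data (1 = perfect fit, 0 = junk)
print(f"R-squared = {result.rvalue ** 2:.2f}")
```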

Comparing taxation levels with growth

In this analysis we compared a total of 25 OECD countries over the period 2010-2019.  We looked at the average overall tax burden in each case over the period and compared it to growth in GDP per capita over the same period.  Ireland and countries with a per capita GDP under $30k were excluded.

The countries were:  Australia, Austria, Belgium, Canada, Czech Republic, Denmark, Finland, France, Germany, Iceland, Israel, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Portugal, Slovenia, South Korea, Spain, Sweden, Switzerland, UK, USA.

The results from this analysis are shown in the scatter plot below:

So, do high levels of tax impede growth?

The correlation between tax burden and growth rates experienced is -0.4.  This indicates there is a relationship between the two.  High tax levels do tend to correlate with weaker growth.  However, the relationship is moderate to weak.  We can see this clearly from the plot.  There are countries with a tax burden above 40% that experienced stronger growth than some with a tax burden under 30%.

In terms of regression, it is possible to plot a trend line that demonstrates how taxation in general might impact growth.  This indicates, for example, that an economy with a tax burden of around 25% might expect growth of around 15% over ten years.  In contrast, the model suggests a tax burden of 45% might expect growth at only around half this level.  However, this model is a terrible fit, with an R² of only 0.16!  We only need to compare the scatter plot to the trend line to see there are numerous exceptions to the rule.  There are plenty of examples of countries experiencing half or double the predicted growth at various different tax levels.
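
For illustration, the two figures quoted above imply a rough trend line like the one below.  This is a back-of-envelope sketch derived from those two quoted points, not the fitted model itself:

```python
# Two points quoted above: ~15% growth at a 25% tax burden, roughly half that (~7.5%) at 45%
x1, y1 = 25, 15.0
x2, y2 = 45, 7.5

slope = (y2 - y1) / (x2 - x1)       # ≈ -0.375 points of growth per point of tax burden
intercept = y1 - slope * x1         # ≈ 24.4

def predicted_growth(tax_burden_pct):
    # Ten-year growth in GDP per capita implied by this rough trend line
    return slope * tax_burden_pct + intercept

print(predicted_growth(35))         # around 11% for a mid-range tax burden
```

With an R² of just 0.16, of course, predictions from any such line should be treated as indicative at best.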

Conclusion

All this suggests that the tax burden has only a limited depressing effect on growth.  An obsession with lowering taxes as a panacea for delivering high growth is therefore naïve.  Tax is just one of several factors that need to be considered and, quite possibly, not even the most important.

Strong growth is clearly possible in countries with high levels of tax.  By the same token, having low tax rates does not guarantee strong growth by any means.

A more balanced view might be simply to say that tax has a limited depressive impact on growth.  For this reason, we could argue that it is better to keep it lower than higher.  But, by the same token, increasing the overall tax burden by 3% or even 4% would not necessarily depress growth.  Growth might even be stimulated if the additional revenues raised were invested wisely.

The idea that lowering tax is a silver bullet for stimulating growth is therefore unsupportable.

About Synchronix

Synchronix is a full-service market research consultancy.  We work with data to generate market insight to drive evidence-based decision making.

We specialise in serving b2b and niche consumer markets in Science, Engineering, Technology, Construction, Distribution, and Agriculture. Understanding and dealing with technical and industrial markets is our forte and this enables us to draw a greater depth of insight than non-specialist agencies.

Take a look at our website to find out more about our services.


Understanding Customer Experience

Customer experience surveys are widely used in business today.  They are generally regarded as a great way to get invaluable feedback from customers.

But whether you’ve been running a customer survey for years, or whether you’ve never run one, it’s worth reflecting on the benefits they can offer.  And it’s also worth considering how to go about getting the most from them.

Why run a customer experience survey?

Quite simply, happy customers are loyal customers, and loyal customers deliver more recommendations and more repeat business.

A well-designed customer satisfaction survey will deliver several important business benefits:

  1. By monitoring customer opinion, you’ll have early warning of any potential problems that might cause you to lose revenue further down the line.
  2. It will tell you which improvements will do the most to boost customer loyalty and sales.
  3. It can also tell you what improvements/changes are unlikely to deliver much value.
  4. In short, it prioritises what you need to do to nurture a loyal customer base.

So, when a customer survey is well designed and used effectively, it serves as a positive vehicle for tangible business improvements.

However, it is possible to get it wrong.  And if this happens you might end up with something that delivers limited actionable insight.

So, how do you ensure your customer survey falls into the former category rather than the latter? Here’s a list of pointers I’ve pulled together, based on over 30 years’ experience of designing and running such programs:

Make sure you set the right objectives to start with

Let’s start at the very beginning by looking at the overall business objectives for a customer survey.  Get that wrong and you’ll be lucky to get any real value from the exercise.

The most common reason why such surveys can fail is when they’ve been designed as a box ticking exercise from the very start. If a survey is just used to provide internal reassurance that all is well, it isn’t ever going to serve as an effective agent for change.

Fortunately, this kind of problem is rare.  A more common issue is that sometimes these surveys can be used exclusively in a limited, tactical, way. Here performance scores for each area of the business might be used to directly inform such things as bonuses and performance reviews.  That’s all fine but if this is the only tangible way in which the survey is used, it’s a missed opportunity.

Don’t get me wrong, there’s a value in using surveys to tactically monitor business performance.  But their true value lies in using them to guide business improvement at a strategic level.  If we lose sight of this goal, we won’t ever get the most out of such surveys.

Takeaway:  The primary goal of a customer experience survey should move beyond monitoring performance to providing direction for business improvement.

Using standard templates is just a starting point

Moving on to how we go about designing a survey, the next trap people can fall into is taking the easy option.  By that, I mean running a generic customer survey based on standard templates.

It is easy enough to find standard templates for customer experience surveys online.  Most DIY survey tools will provide them as part of the service.  Standard questions for such things as NPS and generic performance scoring are readily available.

But standard templates are, as the name implies, standard.  They are designed to serve as a handy starting point – not an end in themselves.

There is nothing in any of them that will be unique or specific to any business.  As a result, if you rely purely on a standard template, you’ll only get generic feedback.

That might be helpful up to a point, but to receive specific, actionable, insight from a survey you need to tailor it to collect specific, actionable, feedback from your customers.  And that means you need to ask questions about issues that are specific to your business, not any business.

Takeaway:  Only ever use standard templates as a starting point.  Always tailor customer experience surveys to the specific needs of your business.

Avoid vague measures, focus on actionable ones

It may sound obvious, but it’s important to make sure you are measuring aspects of business performance that are clearly defined and meaningful.  That means it needs to be specific, so there is no confusion over what it might or might not mean when you come to look at the results.

Leaving these definitions too broad or too generic can make it very hard to interpret the feedback you get.

Let’s take an example – ‘quality’.  What exactly does that mean?  It might mean slightly different things in different industries.  And it might mean different things to different people, even within the same organisation.

If your product is machinery, product quality could refer to reliability and its ability to run with minimal downtime.  However, it might also relate to the quality of work the machine produces.  Or perhaps, under certain circumstances, it might refer more to accuracy and precision?  When you think about it, ‘quality’ could encompass a range of different things.

To avoid potentially confusing outcomes of this sort you need to use more specific phrasing.  That way, when you identify an area that needs improvement, it’s clear what needs to be done.

Takeaway:  Ensure you’re testing specific measures of business performance. 

Always provide a mechanism for open feedback

Not everyone will answer open-ended questions by any means.  Indeed, surveys can fall into the trap of asking too many, leading to a poor response.

However, one or two well targeted open questions will provide invaluable feedback.  It is a golden opportunity to pick up on issues and opportunities for improvement that you haven’t thought of, but which your customers have!

Takeaway: Always include one or two well targeted open questions to elicit feedback from customers.  But don’t add too many or response rates will suffer, and the quality of answers will be diluted.

Ensuring insight is actionable

Of course, you might already have a customer experience survey.  Perhaps it has been running for years.  If it is delivering good value then happy days.  However, that’s not always the case.

Sometimes people find that the outputs from an existing customer experience survey are not particularly actionable.  If that is the case, then it’s a clear warning sign you’re doing something wrong.

There are only two reasons why this ever happens:

1st reason:   Senior management in the business are incapable of driving positive change, even if they are provided with clear direction as to what they should be doing.

2nd reason:  The survey was poorly designed in the first place and is unlikely to ever deliver anything actionable.

Unfortunately, the first of these problems can’t be solved by a survey or any other form of market insight come to that!  But it is possible to do something about the latter!

The answer is simple – you need to redesign your customer experience survey.  Don’t keep re-running it and repeating the same old mistakes.

Takeaway: If your customer experience survey is not delivering actionable insight, stop running it.  You need to either re-design it or save your money and not bother!

Legacy questions and survey bloat

Has your customer survey been running for several years now?  Does the following pattern sound familiar?

  • Every year, the previous year’s questionnaire gets circulated to survey stakeholders for feedback.
  • Each stakeholder comes back with feedback that involves adding new questions, but they don’t often suggest taking any of the old questions away.
  • Some of the new questions (perhaps all) relate to some very specific departmental initiatives.
  • The questionnaire gets longer.
  • The response rate goes down as a result.
  • A year goes by and it may not be entirely clear what has been done with the outputs of some of these questions.
  • The process repeats itself….

Of course, there is a benefit in maintaining consistency.  However, there’s little point measuring things that are no longer relevant for the business.

It may well be time for a more fundamental review. 

Maybe even consider going back to square one and running some qualitative research with customers. Could you be missing something vitally important that a few open conversations with customers could reveal?

Alternatively, maybe you need to run some internal workshops.  How well do current priorities really align with legacy questions in the survey?

Takeaway: If you think your customer survey has become overly bloated with legacy questions, don’t shy away from carrying out a full review.  

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing services & content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.


Just how accurate are Opinion Polls?

Just after the Brexit referendum result became known, in late June 2016, several newspapers ran stories on how the opinion polls “got it wrong”.

Typical of these was an article in the Guardian from 24th June 2016, with the headline “How the pollsters got it wrong on the EU referendum.”  In it the journalist observed:

“Of 168 polls carried out since the EU referendum wording was decided last September, fewer than a third (55 in all) predicted a leave vote.”

Of course, this is neither the first nor the last time pollsters have come in for some criticism from the media.  (Not that it seems to stop journalists writing articles about opinion polls of course).

But sensationalism aside, how accurate are polls?  In this article, I’ll explore how close (or far away) the polls came to predicting the Brexit result.  And what lessons we might draw from this for the future.

The Brexit Result

On 23rd June 2016, the UK voted by a very narrow margin (51.9% to 48.1%) in favour of Brexit.  However, if we just look at polls conducted near to the referendum, the general pattern was to predict a narrow result.  In that respect the polls were accurate. 

Taking an average of all these polls, the pattern for June showed an almost 50/50 split, with a slight edge in favour of the leave vote.  So, polls taken near to the referendum predicted a narrow result (which it was) and, if averaged, just about predicted a leave result (which happened).

[Chart: Brexit result vs. poll predictions]

To compare the predictions with the results, I’ve excluded people who were ‘undecided’ at the time of the surveys, since anyone still ‘undecided’ on the day would presumably not have voted at all.

Of course, the polls did not get it spot on.  But that is because we are dealing with samples.  Samples always have a margin of error, so cannot be expected to be spot on.

Margin of error

The average sample size of polls run during this period was 1,799 (some had sample sizes as low as 800; others, several thousand).  On a sample size of 1,799, a 50/50 result would have a margin of error of +/- 2.3%.  That means if such a poll predicts that 50% of people are going to vote leave, we can be 95% confident that between roughly 48% and 52% will do so.
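
That margin can be checked in a couple of lines of Python – a minimal sketch, assuming the standard 95% confidence formula for a proportion:

```python
import math

n = 1799   # average sample size of the polls
p = 0.5    # an assumed 50/50 split (the worst case for margin of error)

# 95% confidence margin of error for a proportion: 1.96 * sqrt(p(1-p)/n)
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin * 100:.1f}%")  # roughly +/- 2.3%
```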

In the end, the average of all these polls came to within 1.7% of predicting the exact result.  That’s close!  It’s certainly within the margin we’d expect.

You might wonder why polls don’t use bigger samples to improve the margin.  If a result looks like being close, you’d think it worth using a sample large enough to reduce the error margin. 

Why not, for example, use a sample big enough to reduce the statistical error margin to 0.2% – a level that would provide a very accurate prediction?  To achieve that you’d need a sample of around 240,000!  That’s a survey costing a whopping 133 times more than the typical poll!  And that’s a cost people who commission polls would be unwilling to bear.
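
Turning the same formula around gives the sample size needed for a given margin, which is where the figure of roughly 240,000 comes from – again a sketch, assuming a 95% confidence level and a 50/50 split:

```python
import math

target_margin = 0.002   # +/- 0.2%
p = 0.5                 # 50/50 split

# Rearranging the margin-of-error formula: n = (1.96 / margin)^2 * p(1-p)
n_required = (1.96 / target_margin) ** 2 * p * (1 - p)
print(f"Sample required: {math.ceil(n_required):,}")             # around 240,000
print(f"Relative to a typical poll of 1,799: {n_required / 1799:.0f}x the size")
```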

Data Collection

Not all polls are conducted in the same way, however. Different pollsters have different views as to the best ways to sample and weight their data.  Most of these differences are minor and all reflect the pollster’s experience of what approaches have delivered the most accurate results in the past.  Taking a basket of several polls together would create a prediction more likely to iron out any outliers or odd results resulting from such differences.

However, there is one respect in which polls fall into two potentially very different camps when it comes to methodology.  Some are conducted online, using self-completed surveys, where the sample is drawn from online consumer panels.  Others are conducted by telephone, using randomly selected telephone sample lists.

Both have their potential advantages and disadvantages:

  • Online: not everyone is online and not everyone is easy to contact online.  In particular, older people may use the internet less often.  So, any online sample risks under-representing people with limited internet access.
  • Telephone: not everyone is accessible by phone.  Many of these sample lists are better at reaching people with landlines than mobiles.  That might make it difficult to access some younger people who have no landline, or people using the telephone preference service.

But, that said, do these potential gaps make any difference?

Online vs Telephone Polling

So, returning to the Brexit result, is there any evidence to suggest either methodology provides a more accurate result?

[Chart: Brexit result vs. online and telephone polls]

A simple comparison between the results predicted by the online polls vs the telephone polls conducted immediately prior to the referendum reveals the following:

  • Telephone polls: Overall, the average for these polls predicted a 51% majority in favour of Remain.
  • Online polls: Overall, the average for these polls predicted a win for Leave with 50.5% of the vote (in fact it was 51.9%).

On the surface of things, the online polls appear to provide the more accurate prediction.  However, it’s not quite that simple.

Online polls are cheaper to conduct than telephone polls.  As a result, online polls can often afford to use larger samples.  This reduces the level of statistical error.  In the run up to the referendum the average online poll used a sample of 2,406 vs. an average of 1,038 for telephone polls.

The greater accuracy of the online polls over this period could therefore be largely explained simply by the fact that they were able to use larger samples.  As telephone is a more expensive medium, it is undeniably easier to achieve a larger sample via the online route.

Accuracy over time

You might expect that, as people get nearer to the time of an election, they are more likely to come to a decision as to how they will vote.

However, our basket of polls in the month leading up to the Brexit vote shows no sign that the proportion of ‘undecided’ voters changed.  During the early part of the month around 10% consistently stated they had not decided, and closer to the referendum this number remained much the same.

However, when we look at polls conducted in early June vs polls conducted later, we see an interesting contrast.  As it turns out, polls conducted early in June predicted a result closer to the actual result than those conducted closer to the referendum.

In fact, it seems that polls detected a shift in opinion that seems to have occurred around the time of the assassination of the MP, Jo Cox.

[Chart: Brexit result vs. polls taken before and after the killing of Jo Cox]

Clearly, the average for the early-month polls predicts a result very close to the final one.  The basket of later polls, however, despite the advantage of larger samples, is off the mark by a significant margin.  It is these later polls that reinforced the impression in some people’s minds that the country was likely to vote Remain.

But why?

Reasons for mis-prediction

Of course, it is difficult to explain why surveys seemed to show a result that was a little way off the final numbers so close to the event. 

If we look at opinion surveys conducted several months before the referendum, then differences become easier to explain.  People change their minds over time and other people who are wavering will make up their minds.

A referendum conducted in January 2016 would have delivered a slightly different result to the one in June 2016, partly because a slightly different mix of people would have voted and partly because some people would have held a different opinion in January to the one they held in June.

However, by June 2016, you’d expect that a great many people would have made up their minds.

Logically, however, there are four reasons I can think of as to why there might be a mis-prediction by polls conducted during this period:

  1. Explainable statistical error margins.
  2. Unrepresentative approaches.
  3. Expressed intentions did not match actual behaviour.
  4. “Opinion Magnification”.

Explainable statistical error margins

Given the close nature of the vote, this is certainly a factor.  Polls of the size typically used here would find it very difficult to precisely predict a near 50/50 split. 

51.9% voted Leave.  A poll of 2000 could easily have predicted 49.7% (a narrow reverse result) and still be within an acceptable statistical error margin. 

18 of the 31 polls (58%) conducted in June 2016 returned results within the expected margin of statistical error vs the final result.  Where these polls called the result the wrong way (3 did), the miss can be explained purely by the fact that the sample size was not big enough.

However, this means that 13 returned results that can’t be accounted for by expected statistical error alone. 

If we look at surveys conducted in early June, 6 returned results outside the expected bounds of statistical variance.  However, this was usually not significantly outside those bounds (just 0.27% on average). 

The same cannot be said of surveys conducted later in June.  Here polls were getting the prediction wrong by an average of 1.28% beyond the expected range.  All the surveys (7 in total) that predicted a result outside of the expected statistical range, consistently predicted a Remain win.

This is too much of a coincidence.  Something other than simple statistical error must have been at play.

Unrepresentative approaches

Not everyone is willing (or able) to answer Opinion Polls. 

Sometimes a sample will contain biases.  People without landlines would be harder to reach for a telephone survey.  People who never or rarely go online will be less likely to complete online surveys.

These days many pollsters make a point of promising a ‘quick turnaround’.  Some will boast that they can complete a poll of 2,000 interviews online in a single day.  That kind of turnaround is great news for a fast-paced media world but will under-represent infrequent internet users.

ONS figures for 2016 showed that regular internet use was virtually universal amongst the under 55s.  However, 12% of 55–64-year-olds, 26.9% of 65–74-year-olds and 61.3% of the over 75s had not been online in the three months to June 2016.  Older people were more likely to vote Leave.  But were the older people who don’t go online more likely to have voted Leave than those who do?

It is hard to measure the effect of such biases.  Was there anything about those who could not / would not answer a survey that means they would have answered differently?  Do they hold different opinions?

However, such biases won’t explain why the surveys of early June proved far better at predicting the result than those undertaken closer to the vote. 

Expressed intention does not match behaviour

Sometimes, what people do and what they say are two different things.  This probably doesn’t apply to most people.  However, we all know there are a few people who are unreliable.  They say they will do one thing and then go ahead and do the opposite.

Also, it is only human to change your mind.  Someone who planned to vote Remain in April, might have voted Leave on the day.  Someone undecided in early June, may have voted Leave on the day.  And some would switch in the other direction.

Without being able to link a survey answer to an actual vote, there is no way to test the extent to which people’s stated intentions fail to match their actual behaviour.

However, again, this kind of switching does not adequately explain the odd phenomenon we see in June polling.  How likely is it that people who planned to vote Leave in early June, switched to Remain later in the month and then switched back to Leave at the very last minute?  A few people maybe, but to explain the pattern we see, it would have to have been something like 400,000 people.  That seems very unlikely.

The Assassination of Jo Cox

This brings us back to the key event on 16 June – the assassination of Jo Cox.  Jo was a Labour politician who strongly supported the Remain campaign and was a well-known champion of ethnic diversity.  Her assassin was a right-wing extremist who held virulently anti-immigration views.

A significant proportion of Leave campaigners cited better immigration control as a key benefit of leaving the EU.  Jo’s assassin was drawn from the most extremist fringe of such politics.

The boost in the Remain vote recorded in the polls that followed her death was attributed at the time to a backlash against the assassination – that some people, shocked by the implications of the incident, were persuaded to vote Remain.  Doing so might be seen by some as an active rejection of the kind of extreme right-wing politics espoused by Jo’s murderer.

At the time it seemed a logical explanation.  But as we now know, it turned out not to be the case on the day.

Reluctant Advocates

There will be some people who will, by natural inclination, keep their voting intentions secret. 

Such people are rarely willing to express their views in polls, on social media, or even in conversation with friends and family.  In effect they are Reluctant Advocates.  They might support a cause but are unwilling to speak out in favour of it.  They simply don’t like drawing attention to themselves.

There is no reason to suspect that this relatively small minority would necessarily be skewed any more or less to Leave or Remain than everyone else.  So, in the final analysis, it is likely that the Leave and Remain voters among them will cancel each other out. 

The characteristic they share is a reluctance to make their views public.  However, the views they hold beyond this are not necessarily any different from most of the population.

An incident such as the assassination of Jo Cox can have one of two effects on public opinion (indeed it can have both):

  • It can prompt a shift in public opinion which, given the result, we now know did not happen.
  • It can prompt Reluctant Advocates to become vocal, resulting in a phenomenon we might call Opinion Magnification.

Opinion Magnification

Opinion Magnification creates the illusion that public opinion has changed or shifted to a greater extent than it actually has.  This will not only be detected in Opinion Polls but also in social media chatter – indeed via any media through which opinion can be voiced.

The theory is that the assassination of Jo Cox shocked Remain-supporting Reluctant Advocates into becoming more vocal.  By contrast, it would have had the opposite effect on Leave-supporting Reluctant Advocates. 

The vast majority of Leave voters would clearly not have held the kind of extremist views espoused by Jo’s assassin.  Indeed, most would have been shocked and would naturally have tried to distance themselves from the views of the assassin as much as possible.  This fuelled the instincts of Leave voting Reluctant Advocates to keep a low profile and discouraged them from sharing their views.

If this theory is correct, this would explain the slight uplift in the apparent Remain vote in the polls.  This artificial uplift, or magnification, of Remain supporting opinion would not have occurred were it not for the trigger event of 16 June 2016.

Of course, it is very difficult to prove that this is what actually occurred.  However, it does appear to be the only explanation that fits the pattern we see in the polls during June 2016.

Conclusions

Given the close result of the 2016 referendum, it was always going to be a tough prediction for pollsters.  Most polls will only be accurate to around +/- 2% anyway, so it was a knife-edge call from the start.

However, in this case, in the days leading up to the vote, the polls were not just out by around 2% in a few cases.  They were out by around 3% on average, predicting a result that was the reverse of the actual outcome.

Neither statistical error, potential biases nor any disconnect between stated and actual voting behaviour can adequately account for the pattern we saw in the polls. 

A more credible explanation is distortion by Opinion Magnification prompted by an extraordinary event.  However, as the polling average shifted no more than 2-3%, the potential impact of this phenomenon appears to be quite limited.  Indeed, in a less closely contested vote, it would probably not have mattered at all.

Importantly, all this does not mean that polls should be junked.  But it does mean that they should not be viewed as gospel.  It also means that pollsters and journalists need to be alert for future Opinion Magnification events when interpreting polling results.

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing services & content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

Sources, references & further reading:

How the pollsters got it wrong on the EU referendum, Guardian 24 June 2016

ONS data on internet users in the UK

Polling results from Opinion Polls conducted prior to the referendum as collated on Wikipedia

FiveThirtyEight – for Nate Silver’s views on polling accuracy


Working with Digital Data Part 2 – Observational data

One of the most important changes brought about by the digital age is the availability of observational data.  By this I mean data that relates to an observation of actual online consumer behaviour.  A good example would be in tracing the journey a customer takes when buying a product.

Of course, we can also find a lot of online data relating to attitudes and opinions but that is less revolutionary.  Market Research has been able to provide a wealth of that kind of data, more reliably, for decades.

Observational data is different – it tells us about what people actually do, not what they think (or what they think they do).  This kind of behavioural information was historically very difficult to get at any kind of scale without spending a fortune.  Not so now.

In my earlier piece I had a look at attitudinal and sentiment related digital data.  In this piece I want to focus on observational behavioural data, exploring its power and its limitations.

Memory vs reality

I remember, back in the 90s and early 2000s, it was not uncommon to be asked to design market research surveys aimed at measuring actual behaviour (as opposed to attitudes and opinions). 

Such surveys might aim to establish things like how much people were spending on clothes in a week, or how many times they visited a particular type of retail outlet in a month, etc.  This kind of research was problematic.  The problem lay with people’s memories.  Some people can recall their past behaviour with exceptional accuracy.  However, others literally can’t remember what they did yesterday, let alone recall their shopping habits over the past week.

The resulting data only ever gave an approximate view of what was happening BUT it was certainly better than nothing.  And, for a long time, ‘nothing’ was usually the only alternative.

But now observational data, collected in our brave new digital world, goes some way to solving this old problem (at least in relation to the online world).  We can now know for sure the data we’re looking at reflects actual real-world consumer behaviour, uncorrupted by poor memory.

Silver Bullets

Alas, we humans are indeed a predictable lot.  New technology often comes to be regarded as a silver bullet.  Having access to a wealth of digital data is great – but we still should not automatically expect it to provide us with all the answers.

Observational data represents real behaviour, so that’s a good starting point.  However, even this can be misinterpreted.  It can also be flawed, incomplete or even misleading.

There are several pitfalls we ought to be mindful of when using observational data.  If we keep these in mind, we can avoid jumping to incorrect conclusions.  And, of course, if we avoid drawing incorrect conclusions, we avoid making poor decisions.

Correlation in data is not causation

It may be an old adage in statistics, but it is more relevant today than ever before.  For my money, Nate Silver hit the nail on the head:

“Ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there is no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Häagen-Dazs.”

[Nate Silver]

Finding a relationship in data is exciting.  It promises insight.  But, before jumping to conclusions, it is worth taking a step back and asking if the relationship we found could be explained by other factors.  Perhaps something we have not measured may turn out to be the key driver.

Seasonality is a good example.  Did our sales of Christmas decorations go up because of our seasonal ad-campaign or because of the time of year?  If our products are impacted by seasonality, then our sales will go up at peak season but so will those of our competitors.  So perhaps we need to look at how market share has changed, rather than basic sales numbers, to see the real impact of our ad campaign.
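
One simple way to strip out seasonality of this kind is to work with market share rather than raw sales.  A minimal sketch, using invented quarterly figures:

```python
import pandas as pd

# Invented quarterly figures: our sales and total market sales for a seasonal product
data = pd.DataFrame({
    "quarter":      ["Q1", "Q2", "Q3", "Q4 (campaign)"],
    "our_sales":    [120, 90, 110, 400],
    "market_sales": [1200, 900, 1100, 3900],
})

# Raw sales jump in Q4 regardless of the campaign, because the whole market is seasonal.
# Market share controls for that: did our slice of the market actually grow?
data["market_share_pct"] = 100 * data["our_sales"] / data["market_sales"]
print(data)
```

In this invented example our Q4 sales more than triple, yet market share barely moves – a hint that the season, not the campaign, did most of the work.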

Unrepresentative Data

Early work with HRT seemed to suggest that women on HRT were less susceptible to heart disease than other women.  This was based on a large amount of observed data.  Some theorised that HRT treatments might help prevent heart disease. 

The data was real enough.  Women who were on HRT did experience less heart disease than other women.

But the conclusion was utterly wrong.

The problem was that, in the early years of HRT, women who accessed the treatment were not representative of all women. 

As it turned out they were significantly wealthier than average.  Wealthier women tend to have access to better healthcare, eat healthier diets and are less likely to be obese.  Factors such as these explained their reduced levels of heart disease, not the fact that they were on HRT.

Whilst the completeness of digital data sets is improving all the time, we still often find ourselves working with incomplete data.  Then it is always prudent to ask – is there anything we’re missing that might explain the patterns we are seeing?

Online vs Offline

Naturally, digital data is a measure of life in the online world.  For some brands this will give full visibility of their market since all, or mostly all, of their customers primarily engage with them online.

However, some brands have a complex mix of online and offline interactions with customers.  As such it is often the case that far more data exists in relation to online behaviour than to offline.  The danger is that offline behaviour is ignored or misunderstood because too much is being inferred from data collected online.

This carries a real risk of data myopia, leading to us becoming dangerously over-reliant on insights gleaned from an essentially unrepresentative data set. 

Inferring influence from association

Put simply – do our peers influence our behaviour?  Or do we select our peers because their behaviour matches ours?

Anna goes to the gym regularly and so do most of her friends.  Let’s assume both statements are based on valid observation of their behaviour.

Given such a pattern of behaviour it might be tempting to conclude that Anna is being influenced by ‘herd mentality’. 

But is she? 

Perhaps she chose her friends because they shared similar interests in the first place, such as going to the gym? 

Perhaps they are her friends because she met them at the gym?

To identify the actual influence, we need to understand the full context.  Just because we can observe a certain pattern of behaviour does not necessarily tell us why that pattern exists.  And if we don’t understand why a certain pattern of behaviour exists, we cannot accurately predict how it might change.

Learning from past experiences

Observational data measures past behaviour.  This includes very recent past behaviour of course (which is part of what makes it so useful).  Whilst this is a useful predictor of future behaviour, especially in the short term, it is not guaranteed.  Indeed, in some situations, it might be next to useless. 

But why?

The fact is that people (and therefore markets) learn from their past behaviour.  If past behaviour leads to an undesirable outcome they will likely behave differently when confronted with a similar situation in future.  They will only repeat past behaviour if the outcome was perceived to be beneficial.

It is therefore useful to consider the outcomes of past behaviour in this light.  If you can be reasonably sure that you are delivering high customer satisfaction, then it is less likely that behaviour will change in future.  However, if satisfaction is poor, then there is every reason to expect that past behaviour is unlikely to be repeated. 

If I know I’m being watched…

How data is collected can be an important consideration.  People are increasingly aware their data is being collected and used for marketing purposes.  The awareness of ‘being watched’ in this way can influence future behaviour.  Some people will respond differently and take more steps than others to hide their data.

Whose data is being hidden?  Who is modifying their behaviour to mitigate privacy concerns?  Who is using proxy servers?  These questions will become increasingly pressing as the use of data collected digitally continues to evolve.  Will a technically savvy group of consumers emerge who increasingly mask their online behaviour?  And how significant will this group become?  And how different will their behaviour be to that of the wider online community?

This could create issues with representativeness in the data sets we are collecting.  It may even lead to groups of consumers avoiding engagement with brands that they feel are too intrusive.  Could our thirst for data, in and of itself, put some customers off?  In certain circumstances – certainly yes.  This is already happening.  I certainly avoid interacting with websites with too many ads popping up all over the place.  If a large ad pops up at the top of the screen, obscuring nearly half the page, I click away from the site immediately.  Life is way too short to put up with that annoying nonsense.

Understanding why

By observing behaviour, we can see, often very precisely, what is happening.  However, we can only seek to deduce why it is happening from what we can see. 

We might know that person X saw digital advert Y on site Z and clicked through to our website and bought our product.  Those are facts. 

But why did that happen?

Perhaps the advert was directly responsible for the sale.  Or perhaps person B recommended our product to person X in the bar the night before.  Person X then saw our ad the next day and clicked on it.  In that case the ad only played a secondary role in selling the product – the offline recommendation was key.  Unfortunately, that key interaction occurred offline, so it remained unobserved.

Sometimes the only way to find out why someone behaved in a certain way is to ask them.

Predicting the future

Forecasting the future for existing products using observational data is a sound approach, especially when looking at the short-term future.

Where it can become more problematic is when looking at the longer term.  Market conditions may change, competitors can launch new offerings, fashions shift etc.  And, if we are looking to launch a new product or introduce a new service, we won’t have any data (in the initial instance) that we can use to make any solid predictions.

The question we are effectively asking is about how people will behave and has little to do with how they are behaving today.  If we are looking at a truly ground-breaking new concept then information on past behaviour, however complete and accurate, might well be of little use.

So, in some circumstances, the most accurate way to discover likely future behaviour is to ask people.  What we are trying to do is to understand attitudes, opinions, and preferences as they pertain to an (as yet) hypothetical future scenario.

False starts in data

One problematic area for digital marketing (or indeed all marketing) campaigns is false starts.  AI tools are improving in their sophistication all the time.  However, they all work in a similar way:

  • The AI is provided with details of the target audience.
  • The AI starts with an initial experiment.
  • It observes the results.
  • It then modifies the approach based on what it learns.
  • The learning process is iterative: the longer a campaign runs, the more the AI learns and the more effective it becomes (a toy sketch of this loop follows below).
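
As a very rough illustration of that learn-as-you-go loop, here is a toy epsilon-greedy sketch in Python.  It is not how any particular ad platform works – the audience segments and response rates are invented – but it shows the experiment-observe-adjust cycle described in the list above:

```python
import random

# Invented audience segments with hidden 'true' response rates the tool cannot see directly
true_response_rate = {"segment_a": 0.02, "segment_b": 0.05, "segment_c": 0.01}

shown = {s: 0 for s in true_response_rate}
clicks = {s: 0 for s in true_response_rate}

def observed_rate(segment):
    # What the tool has learned so far about a segment
    return clicks[segment] / shown[segment] if shown[segment] else 0.0

for impression in range(10_000):
    if random.random() < 0.1:                      # explore: try a random segment
        segment = random.choice(list(true_response_rate))
    else:                                          # exploit: back the best segment seen so far
        segment = max(true_response_rate, key=observed_rate)
    shown[segment] += 1
    if random.random() < true_response_rate[segment]:   # observe the result
        clicks[segment] += 1

# After enough impressions, spend concentrates on the segment that actually responds best
print({s: round(observed_rate(s), 3) for s in true_response_rate})
```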

However, how does the AI know what target audience it should aim for in the initial instance?  In many cases the digital marketing agency determines that based on the client brief.  That brief is usually written by a human and should (ideally) provide a clear answer to the question “what is my target market?”

That tells the Agency and, ultimately, the AI, who it should aim for.

However, many people, unfortunately, confuse the question “what is my target market?” with “what would I like my target market to be in an ideal world?”  This is clearly a problem and can lead to a false start.

A false start is where, at the start of a marketing campaign, the agency is effectively told to target the wrong people.  Therefore, the AI starts by targeting the wrong people and has a lot of learning to do!

A solid understanding of the target market in the first instance can make all the difference between success and failure.

Balancing data inputs

The future will, no doubt, provide us with access to an increased volume, variety and quality of digital data.  New tools, such as AI, will help make better sense of this data and put it to work more effectively.  The digital revolution is far from over.

But how, when, and why should we rely on such data to guide our decisions?  And what role should market research (based on asking people questions rather than observing behaviour) play?

Horses for courses

The truth is that observed data acquired digitally is clearly better than market research for certain things. 

Most obviously, it is better at measuring actual behaviour and using it for short-term targeting and forecasting. 

It is also, under the right circumstances, possible to acquire it in much greater (and hence statistically reliable) quantity.  Crucially (as a rule) it is possible to acquire a large amount of data relatively inexpensively, compared to a market research study.

However, when we are talking about observed historic data, it is better at telling us ‘what’, ‘when’ and ‘how’ than it is at telling us ‘why’ or ‘what next’.  We can only look to deduce the ‘whys’ and the ‘what next’ from the data.  In essence it measures behaviour very well, but it is poor at capturing opinion or potential shifts in future intention. 

The role of market research

Question-based market research surveys are (or at least should be) based on structured, representative samples.  They can be used to fill in the gaps digital data can’t cover – in particular they measure opinion very well and are often better equipped to answer the ‘why’ and ‘what next’ questions than observed data (or attitudinal digital data). 

Where market research surveys will struggle is in measuring detailed past behaviour accurately (due to the limitations of human memory), even if it can measure it approximately. 

The only reason for using market research to measure behaviour now is to provide an approximate measure that can be linked to opinion-related questions asked on the same survey – to be able to tie in the ‘why’ with the ‘what’.

Thus, market research can tell us how the opinions of people who regularly buy products in a particular category are different from less frequent buyers.  Digital data can usually tell us, more accurately who has bought what and when – but that data is often not linked to attitudinal data that explains why.

Getting the best of both data worlds

Obviously, it does not need to be an either/or question.  The best insight comes from using digital data in combination with a market research survey.

With a good understanding of the strengths and weaknesses of both approaches it is possible to obtain invaluable insight to support business decisions.

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources, references & further reading:

Observational Data Has Problems. Are Researchers Aware of Them? GreenBook Blog, Ray Poynter, October 2020

UK Elections 2021 – How is the political landscape changing?

How is the political landscape changing? As the dust settles on the May 2021 elections, it is worth taking a closer look at the results to see what they might tell us.   

England

The overall results for Labour have been bad across the English elections.  Labour has lost seats across many areas while, at the same time, the Tories have picked up seats.

Overall, the Tories have increased their number of councillors in contested areas by 11%, while Labour’s number has declined by 20%. 

The LibDems remain the third largest party but have seen little real change.

Other important highlights are that UKIP has now disappeared from the political scene and Reform has failed to hoover up those old seats.  The main beneficiary of UKIP’s demise has clearly been the Tories. 

There has also been a dramatic increase in the number of Green councillors (more than doubling their number in contested areas to 151). 

One final important highlight is the fact that there have been gains across the board for a mix of independents (an 18% increase to 255 councillors).

Labour’s highest profile loss was, of course, Hartlepool.  However, here, the story has more to it than meets the eye.

Hartlepool

In Hartlepool the Tories saw their vote increase from 28.9% at the last general election to 51.9% on May 6th.   Much of this gain is likely the result of the disappearance of the Brexit Party as a meaningful political force.  25.8% voted BP in 2019 which, if added to the Tory vote at that time, would total 54.7% – similar to the Tory vote this time around.
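
A quick back-of-the-envelope check, sketched in Python using the vote shares quoted above, shows how closely the combined 2019 Tory and Brexit Party share matches the 2021 Tory share:

```python
# Vote shares quoted above, in percent.
tory_2019 = 28.9          # Conservative share at the 2019 general election
brexit_party_2019 = 25.8  # Brexit Party share at the 2019 general election
tory_2021 = 51.9          # Conservative share at the May 2021 by-election

# If the 2019 Brexit Party vote is added to the 2019 Tory vote...
combined_2019 = tory_2019 + brexit_party_2019

print(f"Combined 2019 Tory + BP share: {combined_2019:.1f}%")  # 54.7%
print(f"Actual 2021 Tory share:        {tory_2021:.1f}%")      # 51.9%
```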

Whilst this may explain the Tory win, it does not explain the reduction in the Labour vote (falling from 37.7% in the last general election to 28.7%).  Smaller parties like the Greens may have taken votes from Labour but as the Greens only accounted for 1.2% of the vote, this can hardly explain it.

One point to remember is that the incumbent MP was forced to leave office because of allegations of sexual harassment and victimisation.  This may have served to turn some voters away from Labour – but the question remains: whatever their reasons for not voting Labour, who did those voters turn to?

A big factor appears to have been an independent candidate – Sam Lee, a local businesswoman.  Sam positioned herself as a Westminster outsider who stood up for the local business community.  A vote for her, she claimed, would “show politicians that we are sick of their party games and empty promises”.  A vote for her was, then, in many ways a rejection of the status quo.  Sam polled 9.7% of the vote and, as she didn’t stand in 2019, it looks like she may have taken a fair number of votes away from Labour.

No change..?

So, in 2021, it may be that Hartlepool saw no real significant switch from Labour to Tory at all – that had already happened in 2019, when large numbers of voters changed to the Brexit Party.  And having switched to the BP, the move to voting Tory seems to have been an easy step for many. 

The vote for Sam Lee is significant though.  It shows a considerable number of people prepared to vote for someone outside the political establishment, and a desire amongst many for something quite different from the established parties.

The Red Wall weakens in the North and Midlands

In general, results in the North and Midlands have shown the biggest Tory gains plus the most serious Labour losses.

Again, the explanation seems to lie mainly with picking up former Brexit Party voters rather than outright direct conversion of 2019 Labour voters. 

The biggest Tory gains compared with previous local elections were in Yorkshire and Humberside (+11.2%), the West Midlands (+9.7%) and the North East (+7.3%).

These marry up with the more significant Labour losses – Yorkshire and Humberside (-4.5%), the West Midlands (-5%) and the North East (-4%).

Labour losses and Tory gains were less significant elsewhere in England.

So, are we witnessing a sea-change in voting patterns in the North driven by regional factors or is it something more complicated than this? 

It is true that the so-called Red Wall has clearly been seriously eroded in many parts of the North.  However, Labour has performed well in certain large cities in the region.  Could it be that this is more about how voting patterns are changing in metropolitan v non-metropolitan areas than it is about changing attitudes in the North?

The Metropolitan Effect

Labour has performed well in northern metropolitan areas such as Liverpool and Manchester, showing that it can hold its own there under the right conditions.

In Manchester, Labour even gained ground.  Perhaps this was due in no small part to the charismatic Andy Burnham but the numbers tell a convincing tale.

Labour increased its share of the vote on the first choice for Mayor from 63.4% in 2017 to 67.3% in 2021. The Tories slipped from 22.7% to 19.6%.  Here, the lesser parties were very much out of the picture.

The Labour Mayoral vote also held strong in Liverpool.  No sign of any cracks in the Red Wall in these major northern cities – a quite different story from the one we see in less urban areas. 

So why is the metropolitan vote in the North so different from the trends we see elsewhere?

The Role of ‘Englishness’

Will Jennings, Professor of Political Science and Public Policy at Southampton University, feels that the migration of voters to the Brexit Party and then to the Tories has much to do with the emergence of a strong English national identity.  This tends to view the Tories as a party that is positive about the English and Labour as essentially lukewarm about, or even hostile to, an English cultural identity.

Evidence for this can be found in BSA research that looked at the motives behind voting Leave/Remain in the Brexit vote.  This found that people in England who identified themselves as ‘British’ and not ‘English’ voted 62% in favour of Remain.  However, 72% of people who identified themselves as ‘English’ and not ‘British’ voted in favour of Leave.

This sentiment, Jennings would argue, has translated into a vote for the Brexit Party in 2019 and has now converted into a Tory vote.  Parts of the North which have switched to Tory are often areas where this sense of ‘Englishness’ is strongest.

However, cities such as Manchester and Liverpool are more cosmopolitan in character and have strong and distinct local identities (as Mancunians or Scousers).  As a result, the tendency to identify strongly with an ‘English’ nationalist identity is less evident.  This in turn translates into a much-reduced willingness to switch away from Labour to the BP or Tories.

Treating the ‘North’ as a single homogenous area would therefore appear to be a gross over-simplification.

A different picture in Southern England

In the South, there was less dramatic change in voting patterns.  Although we saw some shift to the Tories in the council elections, the change was nowhere near as significant or dramatic as that seen in the political landscape of the North.

However, there are a couple of interesting results that are worth pulling out – both Labour Mayoral wins.

The first is the result for Cambridgeshire and Peterborough.  On the first choice alone, the Tories would have won (Tory 41%, Labour 33%, LibDems 27%).  However, once the LibDems were knocked out of the picture, the second-choice votes of these voters went overwhelmingly to Labour.  The result enabled Labour to win (just), with 51% of the final vote. 
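
For readers unfamiliar with how this two-round count works, the sketch below walks through the mechanics using the first-round shares quoted above.  The 70/30 split of LibDem second preferences is an illustrative assumption chosen to show how the reallocation can flip the result; it is not the published figure.

```python
# First-round shares quoted above (they sum to ~101% due to rounding).
first_round = {"Conservative": 41.0, "Labour": 33.0, "LibDem": 27.0}

# The third-placed candidate is eliminated...
eliminated = min(first_round, key=first_round.get)
remaining = {p: v for p, v in first_round.items() if p != eliminated}

# ...and their voters' usable second choices are redistributed between the
# top two.  The 70/30 split below is an assumption, not data; in reality
# some second preferences are blank or go to other eliminated candidates
# and are never counted.
second_pref_split = {"Labour": 0.70, "Conservative": 0.30}
for party, share in second_pref_split.items():
    remaining[party] += first_round[eliminated] * share

total = sum(remaining.values())
for party, votes in remaining.items():
    print(f"{party}: {votes / total:.1%} of the final two-candidate vote")
```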

The second result is for the West of England Mayor (which covers Bristol, Bath and North East Somerset and South Gloucestershire).

Here Labour increased its vote from 22.2% to 33.4% in the first round.  The Tories also did a little better (increasing from 27.3% to 28.6%).  The LibDems again saw limited but negative change (20.2% down to 16.3%) and the Greens again made progress (up to 21.7% from 11.2%). 

Again, it is worth noting that the presence of a strong independent candidate can affect the results.  In 2017 such a candidate polled 15% of the vote, but this time around no such candidate stood.

This does raise the possibility that a future cooperative arrangement between Labour, the Greens and the LibDems could cause significant damage to the Tories in some parts of the southern political landscape – however distant and unlikely such a prospect might seem today.

What about Scotland?

The results in Scotland, of course, have been quite different from anything we see in England.

Here we have seen the SNP make modest progress – increasing their share of the vote from 46.5% of constituency votes at the last parliamentary election in 2016 to 47.7% now.  The Tories saw little change in fortune (21.9% share now vs 22% in 2016).  Labour, too, saw limited change (21.6% down from 22.6%).

The SNP have consolidated and built on their dominant position even if they have not achieved an outright majority.  Some have suggested that they owe their electoral success at least in part to the general perception that Nicola Sturgeon has handled the Covid crisis well. 

One might make a similar observation across the UK: the light beckoning at the end of the Covid tunnel tends to favour incumbent administrations – the SNP in Scotland and the Tories in England.  There is no doubt some truth in this and, if so, we can see the pattern repeated in Wales.

What about Wales?

Wales bucked the pro-Tory trend we see in England.  Here comparisons with England are more interesting because Wales, like England, voted Leave (whereas Scotland did not).  However, UKIP and latterly the Brexit Party have never been quite the force in Wales that they were in many parts of England (the Brexit Party registered only 5% of the Welsh vote in the 2019 election). 

Here the Tories have not managed to benefit anywhere near so much from picking up former UKIP or Brexit Party voters.  In 2016 the Tories got 21.1% of the constituency vote, which they have been able to increase to 25.1% this time around.  This no doubt reflects picking up some of the old UKIP votes (which accounted for 12.5% of the votes in the 2016 assembly election).

However, in Wales Labour have increased their share of the vote from 34.7% to 39.9%. Plaid Cymru have remained at pretty much the same level (20.7% vs 20.5% last time).

As with elsewhere, it may well be that the incumbent administration is benefiting from the feeling that we are headed in the right direction Covid-wise. 

The lack of the BP/UKIP factor in the Welsh political landscape meant there were only a limited number of these voters for the Tories to pick up.  This supports Professor Jennings’ view that it is the sense of Englishness that has driven a migration of votes from Labour, via UKIP and the Brexit Party, to the Tories.  The absence of the ‘Englishness’ factor in Wales potentially explains why such a pattern has not been repeated here.

In conclusion

It is probably worth concluding by saying that we ought to be very careful in what we read into these results.  The 2021 elections have occurred at a time when so much is in a state of flux.  The Covid crisis makes these times most unusual indeed. 

In a few years’ time when (hopefully) Covid no longer dominates our lives, we will be living in a vastly different world.   Also, we cannot yet say what the longer-term impacts of Brexit may be.  We are also only at the very beginning of the Tory levelling-up agenda.  Much has been promised, but what will be delivered?

This election has highlighted some important emerging trends, but the events of the next few years could yet see things change quite radically.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

We offer market research services, opinion polling and content creation services.  You can read more about this on our website.  

If you wish to follow our weekly blog, you can view all our past articles on our website here.

Sources

Election Results from BBC England

BBC Scottish Election Results

Welsh Election Results from the BBC

Sky News Election Takeaways

BSA

Hold the Front Page

Covid in Numbers – why have some countries suffered more than others?

As vaccinations roll out, we are beginning to see some light at the end of the Covid pandemic tunnel.  It will take a few months yet, but it seems almost unreal to think that by the end of 2021 we may finally be back to some kind of post-pandemic normality.

Now seems like an appropriate time to take stock.  What might we learn from the traumatic events of the past year?  We might ask ourselves why some countries appear to have fared so much worse with Covid than others.  How have some countries experienced relatively low death rates, whereas others have experienced such tragically high numbers?

The Worst Hit

If we take a look at the numbers, the worst hit of the larger countries include many European nations (eight from the top ten worst affected) as well as the USA and Mexico.  All ten have experienced more than 150 deaths per 100,000 population.  The worst affected at the time of writing is the Czech Republic, with over 230 deaths per 100,000.
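
For clarity, the ‘deaths per 100,000’ figures used throughout this article are derived as in the short sketch below.  The death toll and population used here are illustrative, ballpark numbers rather than the exact figures behind the rankings.

```python
def deaths_per_100k(total_deaths: float, population: float) -> float:
    """Deaths per 100,000 population."""
    return total_deaths / population * 100_000

# Illustrative, ballpark figures for a hard-hit country of ~10.7 million
# people (not the exact totals behind the table above).
print(round(deaths_per_100k(25_000, 10_700_000), 1))  # ~233.6 per 100,000
```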

Other countries have escaped relatively lightly.  Amongst the other European nations, Germany has suffered significantly less – i.e., it has experienced a death rate less than half that of countries like the UK, Belgium and Hungary.

Healthcare Quality

One thing we might look at is the quality of healthcare.  More developed countries generally have more established, advanced and comprehensive healthcare.  That being the case, such nations should be better placed to deal with a pandemic such as Covid.  Unfortunately, it is plain to see that there must be a lot more to it than this, with countries like the USA, UK and Italy all suffering badly despite their relatively advanced healthcare systems.

India has a comparatively small proportion of deaths (under 12 per 100,000 on official figures).  Despite this, India’s healthcare system is ranked only the 112th most efficient in the world according to the WHO.  The USA is ranked 37th, the UK 18th and Italy 2nd.  Clearly there must be other factors at play.

One factor is potentially under-reporting.  One source estimated that the true level of Covid deaths in India could be as much as five times larger than the official numbers.  However, even taking that into account, India’s death rate has still been significantly lower than those of the ten hardest hit nations.

Whilst the standard of healthcare has no doubt played some role here, there are clearly other aspects involved.

Population Demographics

One factor is population demographics.  Older patients are much more likely to become seriously ill and die from Covid than younger ones.  Here India’s age demographics count in its favour. 

Only 6% of India’s population is aged over 65.

Compare this to most European countries and the difference is striking – around 20% of the population in the hardest hit European countries is aged over 65.  Italy was the most vulnerable in this sense, with 23% aged over 65 before the pandemic hit.

Of the 10 hardest hit countries, 8 were nations where 19% or more of the population was aged over 65.  The USA has a slightly younger demographic (16% over 65), which would help to limit its vulnerability a little, but it is still clearly more exposed than somewhere like India.

Mexico represents the odd one out here.  Only 7% of Mexicans are aged over 65, giving the country a youthful demographic that is closer to that we see in countries like India.  We must therefore look for other explanations as to why Mexico has suffered so badly.

Urbanisation

Covid spreads best in environments where people live in close proximity to each other and, in general, people living in towns and cities are more likely to live in closer proximity with others.  Indeed, although India in general has seen lower death rates, it has nevertheless suffered more in major urban centres like Mumbai.

Many of the countries that have been worst hit have high levels of urbanisation, which has likely contributed to higher death rates.  Belgium has a particularly high level of urbanisation (98% of its population lives in urban environments), making it especially vulnerable in this sense.  Several other countries on the list have high urbanisation levels (80%+), namely the UK, USA, Mexico and Spain.  A country like India has a much lower level of urbanisation overall (36%), which means its population is more widely dispersed and people in more rural environments are less likely to come into frequent contact with others who might be infected.

Lockdowns

The lockdown measures taken by different countries at different times would also have an impact.  However, as these measures are often taken in response to the pandemic getting out of control in the first place, it is no surprise to find that many of the countries with the worst rates have had to impose longer and stricter lockdowns.

According to the Oxford Covid Government Response Tracker, the countries on our list that have taken the strictest measures for the longest periods over the course of the pandemic include the UK and Italy.  This has not, however, prevented either country from registering high death rates, although it has no doubt helped to prevent the problem getting even worse.

Based on these measures, the countries from this list which have been laxer include Bulgaria (most notably), the Czech Republic, Hungary and Belgium.  So, it is possible that in these cases a more relaxed approach has contributed to a higher death rate.

Test and Trace

Another factor would be the efficiency of a country’s testing and tracing regime.  On this measure Mexico does especially badly, having only managed to test 41 out of every 100,000 people in its population to date – far fewer than any other country listed.

By contrast, the UK has now tested 1,585 people out of every 100,000 – more than any other country on the table.  Despite this, the UK still has the third worst death rate overall.  But here the devil lies in the detail.  The UK has massively improved its testing regime over the course of the pandemic but, initially, it lagged behind somewhat.  During the first 60 days after the first five UK deaths, the British managed to test just 23 people in every 100,000.  This compares poorly with a number of other affected countries. 
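
As an aside, a ‘tests per 100,000 in the first 60 days after the fifth death’ comparison can be computed from a daily time series roughly as follows.  The data in this sketch is entirely made up to demonstrate the calculation; it is not the series behind the figures quoted above.

```python
import pandas as pd

# Made-up daily series, purely to demonstrate the calculation.
daily = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=120, freq="D"),
    "cumulative_deaths": range(0, 240, 2),
    "cumulative_tests": range(0, 1_200_000, 10_000),
})
population = 66_000_000  # assumed population

# Date of the fifth death, then the 60-day window starting from that date.
start = daily.loc[daily["cumulative_deaths"] >= 5, "date"].iloc[0]
window = daily[(daily["date"] >= start) &
               (daily["date"] < start + pd.Timedelta(days=60))]

# Tests carried out within the window, expressed per 100,000 population.
tests_in_window = (window["cumulative_tests"].iloc[-1]
                   - window["cumulative_tests"].iloc[0])
print(round(tests_in_window / population * 100_000, 1), "tests per 100,000")
```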

Germany’s lower death rate overall is partly down to its test and trace efficiencies, especially during the early phase of the pandemic.  The Germans managed to test 37 people in every 100,000 during the first 60 days after their fifth death.

Of all the countries on the list, Mexico stands out as the furthest behind on test and trace at every stage of the pandemic.  No doubt this is a major reason why the country now ranks so highly in terms of death rates.

International Travel

Another factor is the level of international travel.  Countries that experience a large volume of people travelling through their airports and transport hubs are more likely to import Covid from overseas. 

Of course, travel restrictions now apply across many nations but this was not always the case.  The UK and the USA would, under normal circumstances, see significantly more international traffic than most other countries.  And so they, along with Hungary, would have been most exposed to importing infection in the absence of strict border controls and quarantine measures.

The Czech situation

It is worth taking time to consider the Czech situation, since this country has experienced the most serious problems to date. 

In terms of many of the risk factors, nothing immediately stands out that would explain why it tops the list.  The level of urbanisation is high but not unduly so, at 73%.  Likewise, its population demographic is not notably different from many other European countries (20% aged 65 plus).  It also receives limited international traffic compared to many other countries.

However, over the course of the pandemic its lockdown measures have been the second laxest of the ten worst affected countries.  It is also the case that its test and trace figures do not appear as comprehensive as many others (although it appears to be testing a reasonable number of people now).

According to Dr. Rastislav Maďar, the dean of the University of Ostrava’s medical school, the Czech situation can be attributed to three key mistakes.  The first of these was a failure to make mask wearing mandatory, the second a decision to open shops in the run up to Christmas and the third a failure to react quickly enough to the presence of new strains in the new year.

Key lessons learnt

Hopefully, it is clear that no single factor or measure can, in and of itself, entirely explain why any particular country experiences a high death rate.  There are many factors working together in combination. 

However, the nature of the pandemic is such that just a few missteps at any stage can very quickly lead to the situation deteriorating rapidly.  Hopefully, we can all learn from that and avoid making further mistakes in the final stages of the pandemic.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

We provide a wide range of market research and data services.  You can learn more about our services on our website.  Also, please check out our collection of free research guides for more information on specific service offerings.

Sources

Johns Hopkins University https://coronavirus.jhu.edu/map.html

United Nations https://population.un.org/wup/Download/

Our World in Data: https://ourworldindata.org/grapher/covid-stringency-index

World Bank: https://data.worldbank.org/indicator/SP.POP.65UP.TO.ZS?name_desc=false&view=chart

ITV https://www.itv.com/news/2020-12-09/is-indias-covid-19-death-rate-five-times-higher-than-official-figures-suggest

CNN https://edition.cnn.com/2021/02/28/europe/czech-republic-coronavirus-disaster-intl/index.html
