
Illustration of plant growth out of money

Do lower taxes stimulate growth?

Can the simple act of lowering taxes stimulate growth?  We now know (thanks to Liz Truss) that, when unfunded, tax cuts can certainly trigger economic chaos.  But even if they are properly funded, the question remains: will they really foster growth?

There are many who would argue that they do.  Some even go so far as to present a policy of low taxation as a silver bullet – a golden ticket to growth and prosperity!

Bold claims indeed – but are they true?  What is the evidence?

The search for evidence

How would we test the veracity of this concept?  It is not uncommon to see people quote anecdotal examples to support the contention.  But highlighting a particular instance where reduced tax has been followed by positive growth in a single country is potentially no more than a propaganda exercise.  What about the big picture?

There are many countries in the world.  Collectively they represent a wide variety of different economies and different tax regimes.  In a great many cases we have access to a lot of historic data on growth, tax policy and so on.  Surely it cannot be beyond the wit of man to compare tax policy to outcomes across many countries over time.  Can it?

This is not as simple as it sounds but it is possible.  The main problem is to make sure we compare apples with apples.

What is growth?

Firstly, we need to agree some kind of sensible definition of what we mean by ‘growth’.  At a simple level we might look at growth in terms of GDP.  However, GDP just tells us the total monetary value of an economy.  Growth in GDP is, of course, good.  However, it tells us very little about how wealthy people who live in that country are.  That is because it takes no account of population size.

Think of it this way:

  • 100 people live on an island, each earning $5k per year.
  • Collectively they earn $500k in a year.
  • Only one person lives on the neighbouring island.  He earns $100k a year.
  • Which island is wealthier?

One island generates five times the money of the other.  However, clearly the guy living alone on the second island is wealthier.  For this reason, it is often better to look at per capita GDP (GDP per head of population) as a more accurate measure of wealth. 
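The arithmetic here is simple but worth making explicit.  A minimal sketch in Python, using the made-up island figures from the example above:

```python
# Hypothetical islands from the example above: total GDP vs GDP per capita.
islands = {
    "Island A": {"population": 100, "gdp": 500_000},  # 100 people on $5k each
    "Island B": {"population": 1, "gdp": 100_000},    # one person on $100k
}

for name, data in islands.items():
    per_capita = data["gdp"] / data["population"]
    print(f"{name}: total GDP ${data['gdp']:,}, per capita ${per_capita:,.0f}")
# Island A: total GDP $500,000, per capita $5,000
# Island B: total GDP $100,000, per capita $100,000
```

Total GDP ranks Island A five times higher; GDP per capita correctly ranks Island B twenty times wealthier.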

Measuring taxation

It might seem like a simple thing to compare taxation between one economy and another – until you actually try to do it.  It isn’t.

If you think about it, any given country has a wide variety of different taxes and tax rates.  One might have high sales taxes and low income taxes.  Some might have forms of taxes that few other countries have.  In some countries individuals might pay limited personal tax but companies pay a lot (or vice versa). 

Hence, what we need to do is look at the overall tax burden once all these different elements are bundled together – i.e. the proportion of the wealth a country generates that is taken in tax.

Oddities

In order to compare like with like we probably ought not to look at particularly unusual situations.  Ireland is a case in point – so much so that Nobel Prize-winning economist Paul Krugman labelled the distortion in its figures ‘Leprechaun economics’.  So, what happened?

In 2015, Apple changed the way it reported its accounts.  It shuffled a large chunk of revenue, previously reported elsewhere, into Ireland.  Suddenly, Ireland’s reported GDP for the year jumped by 26%!  This had nothing to do with the underlying Irish economy and everything to do with an Apple accounting policy change.

This distortion makes it very misleading to look at the Irish economy in the same way as other economies.

Of course, the biggest ‘oddity’ of all in recent years has been Covid.  The pandemic has had a massive effect on global economies since it first struck in early 2020.  Attempting to measure the impact of tax policy on growth in the period after 2020 would therefore be difficult, to say the least.

Snapshot blindness

We often look at growth quarter by quarter or year on year.  That is fine but it is nevertheless potentially just a snapshot.  It can be distorted by unusual events that are unique to a particular country or year.  This can present a picture in a particular period that is very different from the underlying trend.

To measure the impact of taxation on growth we really need to be thinking in terms of longer-term impacts – performance over a longer period of time than just a year.  The sun coming out on one day does not a summer make.

Base effect

One of the biggest problems in measuring growth between different economies is known as ‘base effect’. 

So, what is base effect? 

Imagine the following scenario:

  • Brian earns $10k per year in his very first job.
  • After his first year, he gets a pay rise and a promotion.  He now earns $15k per year.
  • Brian’s salary has grown by 50%!
  • His friend’s mother Anne earns $200k per year.
  • Anne also gets a pay rise at the end of the year.  Her salary increases to $240k.
  • Her pay has only gone up by 20% but Brian’s has gone up by 50%!
  • So … is Brian doing better than Anne?

Of course not. 

This is a classic example of base effect.  It is the main reason why developing / emerging economies grow very fast compared to developed economies.  They are starting from a very small base!

If we are going to meaningfully compare economic performance, we need to be mindful of distortions like base effect.  Unfortunately, virtually all economies are of different sizes.  Nevertheless, for a meaningful comparison we ideally need to compare economies at a similar stage of economic development.
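The Brian-and-Anne example boils down to a single percentage-growth formula.  A quick sketch using the figures above:

```python
def growth_pct(old, new):
    """Percentage growth from an old value to a new one."""
    return (new - old) / old * 100

# Figures from the example above
brian = growth_pct(10_000, 15_000)    # Brian's salary: $10k -> $15k
anne = growth_pct(200_000, 240_000)   # Anne's salary: $200k -> $240k
print(f"Brian: +{brian:.0f}%, Anne: +{anne:.0f}%")  # Brian: +50%, Anne: +20%
# Brian's *relative* growth is larger only because his base is small:
# Anne's absolute rise ($40k) is eight times Brian's ($5k).
```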

Data limitations

The reality is that more economic information is available for some countries than others and some report slightly differently.  The OECD strives to record comparable metrics for its member states and for several other key countries as far as is possible.

However, the fact remains that reasonably comparable information on tax and growth is not always available. 

This naturally places limitations on the extent to which we can meaningfully compare different countries.

Designing a test to measure the impact of tax

So, how might we go about attempting to measure the impact of tax policy on growth?  That, of course, sounds like a good title for an economics PhD thesis, which is not what we can realistically attempt in a short blog. 

Nevertheless, it is possible to look at some high-level measures for a basket of countries to see if we can observe any patterns over time.  So, with this in mind, let’s define our parameters:

  1. Time period: 2010-2019.  This gives us 10 years of data that should capture trends over a reasonably long period.  It also has the merit of being the most recent period we can pick before Covid starts to distort growth figures.
  2. We’ll measure the overall tax burden in comparison with growth in per capita GDP.
  3. To avoid the worst impacts of base effect, we’ll focus on a basket of developed economies (defined as having a per capita GDP in excess of $30k in 2010 at 2015 prices).
  4. This would potentially include Ireland but, due to the distortions unique to that country already mentioned, we will exclude it.

What tools can we use to find a pattern?

Let’s say we have data for 25 countries over 10 years.  It tells us, in each case, the tax burden and the growth experienced.  How do we know if there is a pattern, i.e., that they are inter-related in any way?

We have a couple of tests we can apply in the initial instance:

  1. Correlation
  2. Regression analysis

These tests measure slightly different things.  You have probably heard of them and may have some familiarity with them.  However, perhaps you are one of the many people who are not entirely clear on the specific differences between the two.

What is correlation and how does it work?

Correlation is a statistical technique that measures the strength of the relationship between two sets of data.  It generates a number between -1 and +1 to indicate the strength of a relationship.  Technically we call this number the correlation coefficient (or simply ‘r’).

A positive number indicates that a pattern exists: as one number increases, the other also tends to increase.

A negative number indicates the reverse – that one number declines as the other increases.

A number close to 1 indicates a very strong relationship.  Close to 0 indicates a virtually non-existent relationship.

So, for example, we might compare number of ice cream sales to average temperature.  If we see a correlation of +0.8, this tells us that there is a strong relationship and that as temperature rises, ice cream sales also rise.
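To make this concrete, here is a sketch of how the coefficient is computed, using made-up temperature and ice cream figures (the standard Pearson formula, implemented by hand so nothing beyond the Python standard library is needed):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length data sets."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up weekly averages: temperature (C) vs ice cream sales
temperature = [14, 16, 19, 22, 25, 28, 30]
sales = [110, 135, 150, 200, 240, 260, 310]
r = pearson_r(temperature, sales)
print(round(r, 2))  # close to +1: sales rise strongly with temperature
```

Reversing one of the two series would flip the sign of r without changing its strength.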

Correlation and causation

Correlation is not causation of course.  Correlation might tell us that ice cream sales rise when polar melt increases.  But that does not mean that buying ice cream causes the ice caps to melt!  In this example it just means that both are impacted by a third variable that we have failed to take into account – i.e. temperature.

In the case of what we’re trying to do here, a strong negative correlation between tax burden and growth would indicate a strong relationship between a high tax burden and poor growth.

As a general guide to the strength of a relationship we’d typically consider:

Correlation coefficient     Strength of the relationship
1 or -1                     Perfect!
0.7 or -0.7                 Strong
0.5 or -0.5                 Moderate
0.3 or -0.3                 Weak
0                           None

What is regression analysis and how does it work?

Correlation seeks to measure the strength of a relationship but no more.  Regression analysis goes one step further.  It seeks to build a model to predict how one factor might change as a result of a change in another.

So, in this case, a regression analysis would seek to predict how an increase or decrease in the tax burden might impact growth.

Typically, regression produces two things.  The first is a formula that you can use to predict an outcome.  Hence, you can use a regression formula to tell you what growth you might expect if you set the tax burden at a particular level.

The second key output from a regression analysis is a measure of how reliably this formula can predict an outcome.  The technical term for this measure is ‘R²’ (not to be confused with the ‘r’ we use in correlation).  Confusingly, this number also lies between 0 and 1 – but it means something quite different.

If the number is 1, the equation is a strong fit to the data.  However, if the number is zero, the equation is pretty much junk.  So R² simply tells us how well we can predict growth rates based on the tax burden.
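As a sketch of what regression actually produces, here is an ordinary least-squares fit on made-up numbers (not the real country data analysed below).  It returns the formula’s intercept and slope, plus the R² goodness-of-fit measure:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical data: tax burden (% of GDP) vs 10-year growth in GDP per capita (%)
tax = [25, 28, 30, 33, 36, 40, 44]
growth = [16, 18, 11, 14, 9, 12, 7]
a, b, r2 = linear_fit(tax, growth)
print(f"growth = {a:.1f} + {b:.2f} * tax   (R2 = {r2:.2f})")
```

Plugging a tax level into the resulting formula gives the predicted growth; R² tells you how seriously to take that prediction.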

Comparing taxation levels with growth

In this analysis we compared a total of 25 OECD countries over the period 2010-2019.  We looked at the average overall tax burden in each case and compared it to growth in GDP per capita over the same period.  Ireland and countries with a GDP per capita under $30k were excluded.

The countries were:  Australia, Austria, Belgium, Canada, Czech Republic, Denmark, Finland, France, Germany, Iceland, Israel, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Portugal, Slovenia, South Korea, Spain, Sweden, Switzerland, UK, USA.

The results from this analysis are shown in the scatter plot below:

So, do high levels of tax impede growth?

The correlation between tax burden and growth rates experienced is -0.4.  This indicates there is a relationship between the two.  High tax levels do tend to correlate with weaker growth.  However, the relationship is moderate to weak.  We can see this clearly from the plot.  There are countries with a tax burden above 40% that experienced stronger growth than some with a tax burden under 30%.

In terms of regression, it is possible to plot a trend line that demonstrates how taxation in general might impact growth.  This indicates, for example, that an economy with a taxation level of around 25% might expect growth of around 15% over ten years.  In contrast, the model suggests a tax rate of 45% might expect growth at only around half this level.  However, this model is a terrible fit, with an R² of only 0.16!  We only need to compare the scatter plot to the trend line to see there are numerous exceptions to the rule.  There are plenty of examples of countries experiencing half or double the predicted growth at various tax levels.

Conclusion

All this suggests that the tax burden has only a limited depressing effect on growth.  So an obsession with lowering taxes as a panacea for delivering high growth is clearly naïve.  Tax is just one of several factors that need to be considered and, quite possibly, not even the most important.

Strong growth is clearly possible in countries with high levels of tax.  By the same token, having low tax rates does not guarantee strong growth by any means.

A more balanced view might be simply to say that tax has a limited depressive impact on growth.  For this reason, we could argue that it is better to keep it lower than higher.  But, by the same token, increasing the overall tax burden by 3 or even 4 percentage points would not necessarily depress growth.  Growth might even be stimulated if the additional revenues raised were invested wisely.

The idea that lowering tax is a silver bullet for stimulating growth is therefore unsupportable.

About Synchronix

Synchronix is a full-service market research consultancy.  We work with data to generate market insight to drive evidence-based decision making.

We specialise in serving b2b and niche consumer markets in Science, Engineering, Technology, Construction, Distribution, and Agriculture. Understanding and dealing with technical and industrial markets is our forte and this enables us to draw a greater depth of insight than non-specialist agencies.

Take a look at our website to find out more about our services.

Image of polling results

Just how accurate are Opinion Polls?

Just after the Brexit referendum result became known, in late June 2016, several newspapers ran stories on how the opinion polls “got it wrong”.

Typical of these was an article in the Guardian from 24th June 2016, with the headline “How the pollsters got it wrong on the EU referendum.”  In it the journalist observed:

“Of 168 polls carried out since the EU referendum wording was decided last September, fewer than a third (55 in all) predicted a leave vote.”

Of course, this is neither the first nor the last time pollsters have come in for some criticism from the media.  (Not that it seems to stop journalists writing articles about opinion polls of course).

But sensationalism aside, how accurate are polls?  In this article, I’ll explore how close (or far away) the polls came to predicting the Brexit result.  And what lessons we might draw from this for the future.

The Brexit Result

On 23rd June 2016, the UK voted by a very narrow margin (51.9% to 48.1%) in favour of Brexit.  However, if we just look at polls conducted near to the referendum, the general pattern was to predict a narrow result.  In that respect the polls were accurate. 

Taking an average of all these polls, the pattern for June showed an almost 50/50 split, with a slight edge in favour of the leave vote.  So, polls taken near to the referendum predicted a narrow result (which it was) and, if averaged, just about predicted a leave result (which happened).

Chart of brexit vote v. result polls

To compare the predictions with the results, I’ve excluded people who were ‘undecided’ at the time of the surveys, since anyone still ‘undecided’ on the day would presumably not have voted at all.

Of course, the polls did not get it spot on.  But that is because we are dealing with samples.  Samples always have a margin of error, so cannot be expected to be spot on.

Margin of error

The average sample size of polls run during this period was 1,799 (some had sample sizes as low as 800; others, several thousand).  On a sample of 1,799, a 50/50 result would have a margin of error of +/- 2.3%.  That means if such a poll predicted 50% of people were going to vote Leave, we could be 95% confident that roughly between 48% and 52% would vote Leave.
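The margin quoted here comes from the standard formula for the 95% confidence interval of a proportion.  A minimal sketch:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p on a sample of n."""
    return z * sqrt(p * (1 - p) / n)

# A 50/50 split on the average Brexit-poll sample of 1,799
moe = margin_of_error(0.5, 1799)
print(f"+/- {moe * 100:.1f}%")  # +/- 2.3%
```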

In the end, the average of all these polls came to within 1.7% of predicting the exact result.  That’s close!  It’s certainly within the margin we’d expect.

You might wonder why polls don’t use bigger samples to reduce the margin of error.  If a result looks like being close, you’d think it might be worth using a large enough sample to narrow that margin.

Why not, for example, use a sample big enough to reduce the statistical error margin to 0.2% – a level that would provide a very accurate prediction?  To achieve that you’d need a sample of around 240,000!  That’s a survey costing a whopping 133 times more than the typical poll!  And that’s a cost people who commission polls would be unwilling to bear.
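Rearranging the same margin-of-error formula gives the sample size needed for a target margin, which is where the figure of around 240,000 comes from:

```python
def sample_size_needed(p, target_moe, z=1.96):
    """Approximate sample size for a target 95% margin of error on proportion p."""
    return round(z ** 2 * p * (1 - p) / target_moe ** 2)

n = sample_size_needed(0.5, 0.002)   # target: +/- 0.2% on a 50/50 split
print(n)                              # 240100
print(round(n / 1799))                # 133 -> about 133x the average poll size
```

Note that halving the margin of error quadruples the required sample, which is why pollsters rarely go much beyond a couple of thousand interviews.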

Data Collection

Not all polls are conducted in the same way, however. Different pollsters have different views as to the best ways to sample and weight their data.  Most of these differences are minor and all reflect the pollster’s experience of what approaches have delivered the most accurate results in the past.  Taking a basket of several polls together would create a prediction more likely to iron out any outliers or odd results resulting from such differences.

However, there is one respect in which polls fall into two potentially very different camps when it comes to methodology.  Some are conducted online, using self-completed surveys, where the sample is drawn from online consumer panels.  Others are conducted by telephone, using randomly selected telephone sample lists.

Both have their potential advantages and disadvantages:

  • Online: not everyone is online and not everyone is easy to contact online.  In particular, older people may use the internet less often.  So, any online sample would under-represent people with limited internet access.
  • Telephone:  not everyone is accessible by phone.  Many of these sample lists may be better in terms of reaching out to people with landlines than mobiles.  That might make it difficult to access some younger people who have no landline, or people using the telephone preference service.

But, that said, do these potential gaps make any difference?

Online vs Telephone Polling

So, returning to the Brexit result, is there any evidence to suggest either methodology provides a more accurate result?

chart of brexit result v. online and telephone polls

A simple comparison between the results predicted by the online polls vs the telephone polls conducted immediately prior to the referendum reveals the following:

  • Telephone polls: overall, the average for these polls predicted a 51% majority in favour of Remain.
  • Online polls: overall, the average for these polls predicted a win for Leave by 50.5% (in fact it was 51.9%).

On the surface of things, the online polls appear to provide the more accurate prediction.  However, it’s not quite that simple.

Online polls are cheaper to conduct than telephone polls.  As a result, online polls can often afford to use larger samples.  This reduces the level of statistical error.  In the run up to the referendum the average online poll used a sample of 2,406 vs. an average of 1,038 for telephone polls.

The greater accuracy of the online polls over this period could therefore be largely explained simply by the fact that they were able to use larger samples.  As telephone is a more expensive medium, it is undeniably easier to achieve a larger sample via the online route.

Accuracy over time

You might expect that, as people get nearer to the time of an election, they are more likely to come to a decision as to how they will vote.

However, our basket of polls in the month leading up to the Brexit vote shows no sign of the proportion of ‘undecided’ voters changing.  During the early part of the month around 10% consistently stated they had not decided.  Closer to the referendum, this number remained much the same.

However, when we look at polls conducted in early June vs polls conducted later, we see an interesting contrast.  As it turns out, polls conducted early in June predicted a result closer to the actual result than those conducted closer to the referendum.

In fact, it seems that polls detected a shift in opinion that seems to have occurred around the time of the assassination of the MP, Jo Cox.

Chart of brexit result v polls taken before and after the killing of Jo Cox

Clearly, the average for the early-month polls predicts a result very close to the final one.  The basket of later polls, however, despite the advantage of larger samples, is off the mark by a significant margin.  It is these later polls that reinforced the impression in some people’s minds that the country was likely to vote Remain.

But why?

Reasons for mis-prediction

Of course, it is difficult to explain why surveys seemed to show a result that was a little way off the final numbers so close to the event. 

If we look at opinion surveys conducted several months before the referendum, then differences become easier to explain.  People change their minds over time and other people who are wavering will make up their minds.

A referendum conducted in January 2016 would have delivered a slightly different result to the one in June 2016 – partly because a slightly different mix of people would have voted, and partly because some people would have held a different opinion in January to the one they held in June.

However, by June 2016, you’d expect that a great many people would have made up their minds.

Logically, however, there are four reasons I can think of as to why there might be a mis-prediction by polls conducted during this period:

  1. Explainable statistical error margins.
  2. Unrepresentative approaches.
  3. Expressed intentions did not match actual behaviour.
  4. “Opinion Magnification”.

Explainable statistical error margins

Given the close nature of the vote, this is certainly a factor.  Polls of the size typically used here would find it very difficult to precisely predict a near 50/50 split. 

51.9% voted Leave.  A poll of 2000 could easily have predicted 49.7% (a narrow reverse result) and still be within an acceptable statistical error margin. 
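We can check that claim directly: with the actual Leave share and a typical sample size, the 95% confidence interval comfortably straddles 50% (a sketch using the standard margin-of-error formula):

```python
from math import sqrt

p, n = 0.519, 2000                  # actual Leave share; typical poll size
moe = 1.96 * sqrt(p * (1 - p) / n)  # 95% margin of error
low, high = p - moe, p + moe
print(f"95% interval: {low * 100:.1f}% to {high * 100:.1f}%")
# 95% interval: 49.7% to 54.1%
```

So a poll showing a narrow Remain lead could still be entirely consistent with the eventual Leave win.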

18 of the 31 polls (58%) conducted in June 2016 returned results within the expected margin of statistical error vs the final result.  Where these polls still pointed to the wrong outcome (as 3 did), this can be explained purely by the fact that the sample size was not big enough.

However, this means that 13 returned results that can’t be accounted for by expected statistical error alone. 

If we look at surveys conducted in early June, 6 returned results outside the expected bounds of statistical variance.  However, this was usually not significantly outside those bounds (just 0.27% on average). 

The same cannot be said of surveys conducted later in June.  Here, polls were getting the prediction wrong by an average of 1.28% beyond the expected range.  All the surveys (7 in total) that predicted a result outside the expected statistical range consistently predicted a Remain win.

This is too much of a coincidence.  Something other than simple statistical error must have been at play.

Unrepresentative approaches

Not everyone is willing (or able) to answer Opinion Polls. 

Sometimes a sample will contain biases.  People without landlines would be harder to reach for a telephone survey.  People who never or rarely go online will be less likely to complete online surveys.

These days many pollsters make a point of promising a ‘quick turnaround’.  Some will boast that they can complete a poll of 2,000 interviews online in a single day.  That kind of turnaround is great news for a fast-paced media world but will under-represent infrequent internet users.

ONS figures showed that, by June 2016, regular internet use was virtually universal amongst the under-55s.  However, 12% of 55–64-year-olds, 26.9% of 65–74-year-olds and 61.3% of the over-75s had not been online in the previous three months.  Older people were more likely to vote Leave.  But were the older people who don’t go online more likely to have voted Leave than those who do?

It is hard to measure the effect of such biases.  Was there anything about those who could not / would not answer a survey that means they would have answered differently?  Do they hold different opinions?

However, such biases won’t explain why the surveys of early June proved far better at predicting the result than those undertaken closer to the vote. 

Expressed intention does not match behaviour

Sometimes, what people do and what they say are two different things.  This probably doesn’t apply to most people.  However, we all know there are a few people who are unreliable.  They say they will do one thing and then go ahead and do the opposite.

Also, it is only human to change your mind.  Someone who planned to vote Remain in April, might have voted Leave on the day.  Someone undecided in early June, may have voted Leave on the day.  And some would switch in the other direction.

Without being able to link a survey answer to an actual vote, there is no way to test the extent to which people’s stated intentions fail to match their actual behaviour.

However, again, this kind of switching does not adequately explain the odd phenomenon we see in June polling.  How likely is it that people who planned to vote Leave in early June, switched to Remain later in the month and then switched back to Leave at the very last minute?  A few people maybe, but to explain the pattern we see, it would have to have been something like 400,000 people.  That seems very unlikely.

The Assassination of Jo Cox

This brings us back to the key event of 16 June – the assassination of Jo Cox.  Jo was a Labour politician who strongly supported the Remain campaign and was a well-known champion of ethnic diversity.  Her assassin was a right-wing extremist who held virulently anti-immigration views.

A significant proportion of Leave campaigners cited better immigration control as a key benefit of leaving the EU.  Jo’s assassin was drawn from the most extremist fringe of such politics.

The boost in the Remain vote recorded in the polls that followed her death was attributed at the time to a backlash against the assassination: that some people, shocked by the implications of the incident, were persuaded to vote Remain.  Doing so might be seen by some as an active rejection of the kind of extreme right-wing politics espoused by Jo’s murderer.

At the time it seemed a logical explanation.  But as we now know, it turned out not to be the case on the day.

Reluctant Advocates

There will be some people who will, by natural inclination, keep their voting intentions secret. 

Such people are rarely willing to express their views in polls, on social media, or even in conversation with friends and family.  In effect they are Reluctant Advocates.  They might support a cause but are unwilling to speak out in favour of it.  They simply don’t like drawing attention to themselves.

There is no reason to suspect that this relatively small minority would necessarily be skewed any more or less to Leave or Remain than everyone else.  So, in the final analysis, it is likely that the Leave and Remain voters among them will cancel each other out. 

The characteristic they share is a reluctance to make their views public.  However, the views they hold beyond this are not necessarily any different from most of the population.

An incident such as the assassination of Jo Cox can have one of two effects on public opinion (indeed it can have both):

  • It can prompt a shift in public opinion which, given the result, we now know did not happen.
  • It can prompt Reluctant Advocates to become vocal, resulting in a phenomenon we might call Opinion Magnification.

Opinion Magnification

Opinion Magnification creates the illusion that public opinion has changed or shifted to a greater extent than it actually has.  This will not only be detected in Opinion Polls but also in social media chatter – indeed via any media through which opinion can be voiced.

The theory is that the assassination of Jo Cox shocked Remain-supporting Reluctant Advocates into becoming more vocal.  By contrast, it would have had the opposite effect on Leave-supporting Reluctant Advocates.

The vast majority of Leave voters would clearly not have held the kind of extremist views espoused by Jo’s assassin.  Indeed, most would have been shocked and would naturally have tried to distance themselves from the views of the assassin as much as possible.  This fuelled the instincts of Leave voting Reluctant Advocates to keep a low profile and discouraged them from sharing their views.

If this theory is correct, this would explain the slight uplift in the apparent Remain vote in the polls.  This artificial uplift, or magnification, of Remain supporting opinion would not have occurred were it not for the trigger event of 16 June 2016.

Of course, it is very difficult to prove that this is what actually occurred.  However, it does appear to be the only explanation that fits the pattern we see in the polls during June 2016.

Conclusions

Given the close result of the 2016 referendum, it was always going to be a tough prediction for pollsters.  Most polls will only be accurate to around +/- 2% anyway, so it was always a knife-edge call.

However, in this case, in the days leading up to the vote, the polls were not just out by around 2% in a few cases.  They were out by around 3% on average, predicting a result that was the reverse of the actual outcome.

Neither statistical error, potential biases nor any disconnect between stated and actual voting behaviour can adequately account for the pattern we saw in the polls. 

A more credible explanation is distortion by Opinion Magnification prompted by an extraordinary event.  However, as the polling average shifted no more than 2-3%, the potential impact of this phenomenon appears to be quite limited.  Indeed, in a less closely contested vote, it would probably not have mattered at all.

Importantly, all this does not mean that polls should be junked.  But it does mean that they should not be viewed as gospel.  It also means that pollsters and journalists need to be alert for future Opinion Magnification events when interpreting polling results.

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

Sources, references & further reading:

How the pollsters got it wrong on the EU referendum, Guardian 24 June 2016

ONS data on internet users in the UK

Polling results from Opinion Polls conducted prior to the referendum as collated on Wikipedia

FiveThirtyEight – for Nate Silver’s views on polling accuracy

Bumping elbows

Freedom Day – a British Experiment

19 July 2021 is “freedom day” – the day when the UK government has relaxed the last Covid restrictions in England.  But does it mark a return to normality (whatever that is), or is it, as some have suggested, a dangerous British experiment?

For many of us, the relaxing of restrictions is a welcome relief.  The cost in economic and social terms has been high.  Many businesses in the hospitality sector have really struggled to survive the restrictions.  That’s not to mention the impact on our social lives.  Covid has left some people feeling incredibly isolated and others struggling on reduced incomes. 

Most of us are keen to see life return to normality.  After all, we cannot go on like this forever.  Sooner or later, we must find a way to live with Covid.

Dangerous experiment?

However, some experts have dubbed “freedom day” a dangerous British experiment. 

In a letter published in the Lancet on 7 July 2021, signed by 100 experts and since endorsed by many scientists around the world, the decision to relax restrictions on 19 July was branded as dangerous and premature. 

These experts highlighted five key risks:

  1. A significant proportion of the population are still unvaccinated (especially younger adults and children).  This will lead to high levels of infection running the risk of leaving many people with long term health problems.
  2. It risks high levels of infection amongst children that will accelerate when they return to school.  This will lead to further significant disruption of children’s education.
  3. Such high levels of infection represent fertile ground for dangerous new strains of Covid to emerge.  This includes the risk of a vaccine resistant strain emerging.
  4. It will lead to more hospitalisations which will place significant pressure on the NHS.
  5. Deprived and vulnerable communities are the most at risk and likely to be hardest hit by rising infection rates.

The experts recommended delaying any further easing of restrictions until the vaccination programme has covered most of the population.  This would imply a delay until late August or possibly early September.

As it stands, on 19 July 2021, government statistics show that nearly 88% of the population had had their first jab and 68% had received both jabs.  These are high numbers that position our vaccination roll-out well ahead of other countries.  However, it is nevertheless the case that one in three of us is not yet fully covered.

Infections are rising

Infections have risen significantly since the beginning of June, as restrictions have been eased and we have had to deal with the impact of the more infectious Delta variant. 

Graph of UK trends in cases: July 2020 - July 2021

The number of cases is fast climbing towards 60,000 and could easily hit 100,000 by the end of the month.  There seems little doubt now that case numbers will exceed the peak we saw back in January 2021.

The link between cases and hospitalisation: weakened but not gone

It has been claimed that new cases are not leading to new hospitalisations. 

A few weeks ago, we wrote a blog in which we created a Covid Index to allow us to view trends in cases, hospitalisations and deaths in parallel.  So now seems like a good time to revisit this to see how well the data supports this claim.

Unfortunately, if we look at the data, we can see that this claim is not entirely true.

It is now clear that we are seeing a gradual but distinct uptick in hospital admissions.  More cases do mean more hospital admissions, even if the link is now a lot weaker than before.

UK Covid trends INDEX: July 2020 - July 2021

The good news is that the level of increase is not tracking new cases anywhere near as closely as was the case back in January.  At that time rising cases led to a similar rise in both hospitalisations and deaths.  These followed on fairly quickly behind case reporting. 

Now, the immediate impact is much reduced and instead we are seeing a more gradual but nevertheless notable increase in hospitalisation.

Clearly, the fact that so many people are now vaccinated (especially amongst the most vulnerable groups) means that a much higher proportion of infections are now mild or asymptomatic.

A modest increase in deaths

A closer look at trends over the past month also shows that, as yet, we are not seeing any major uplift in deaths.  However, the figures do show a slight overall increase.

Graph of Covid Trends INDEX: Summer 2021 Trends

Overall case numbers have grown to be around four times higher than the average for the past 12 months. 

Hospitalisations are rising at a slower rate, but rising nonetheless.  Current levels of hospital admissions sit at around 75% of the average number recorded over the past 12 months.  At the current rate of increase, it is likely that hospital admissions will exceed that average before the end of the month.

Deaths, at present, show only a relatively modest increase since the start of June.  We’d have to say that it is still too early to fully judge the likely medium-term impact on death rates.  Death rates remain low, at around 10-15% of the average rate seen over the past 12 months.  However, this is still up from a rate of under 5% recorded during late May and early June.
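The index used in these comparisons can be sketched in a few lines of code: each series is expressed as a percentage of its own trailing 12-month average, so cases, admissions and deaths can be read on a single scale.  The figures below are illustrative stand-ins chosen to match the rough magnitudes quoted in the text, not real published data.

```python
# A minimal sketch of the Covid Index idea: express the latest daily
# figure for each series as a percentage of that series' 12-month
# average.  The inputs here are hypothetical illustrations.

def covid_index(latest: float, twelve_month_avg: float) -> float:
    """Latest daily figure as a % of the trailing 12-month average."""
    return 100 * latest / twelve_month_avg

# Illustrative values in line with the magnitudes described above:
cases = covid_index(48_000, 12_000)    # roughly 4x the 12-month average
admissions = covid_index(560, 750)     # roughly 75% of that average
deaths = covid_index(26, 210)          # roughly 12% of that average

print(f"Cases index: {cases:.0f}")
print(f"Admissions index: {admissions:.0f}")
print(f"Deaths index: {deaths:.0f}")
```

Normalising each series against its own history in this way is what lets a chart show cases, hospitalisations and deaths in parallel despite their very different absolute scales.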

Likely trend

As vaccination continues to roll out, it will inevitably have an increasingly dampening effect on infections.  However, the relaxing of restrictions will serve as an accelerant – especially amongst young adults, who are the least protected and the most likely to congregate in large social gatherings at pubs and nightclubs.

It is always difficult to predict numbers given the changing nature of the pandemic and the ongoing rolling impact of vaccinations.  However, it seems that by the middle of August we are likely to see:

  • Infection rates: will probably exceed 100,000 cases.
  • Hospital admissions: likely to be c.1,300 per day.
  • Deaths: likely to be c.50-70 per day.

This would mean that hospitalisations would be around the levels we were seeing in mid to late February and deaths at around the levels we were seeing in mid-to-late March. 

With infection rates at about 100,000, many people would be forced to self-isolate under current test and trace rules, which could be very disruptive.  Although the government plans to modify the self-isolation rules for fully vaccinated people, this will not happen until mid-August.

A race to roll out

We are now in a race between a virus that has been given significant freedom to spread on the one hand and a vaccination programme that is fast progressing to a point where the population will be fully vaccinated on the other.  These two factors combine to push the numbers in different directions.

Of course, we have to re-open society and adapt to live with this virus at some point.  Let’s just hope we have not made that step a month or two too soon.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future. 

You can read more about us on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

If you have any specific questions about our services, please contact us.

Sources

Government Coronavirus Data

Lancet, 7 July 2021

The Guardian, 16 July 2021

UK Elections 2021 – How is the political landscape changing?

How is the political landscape changing? As the dust settles on the May 2021 elections, it is worth taking a closer look at the results to see what they might tell us.   

England

The overall results for Labour have been bad across the English elections.  Labour has lost seats across many areas while, at the same time, the Tories have picked up seats.

Overall, the Tories have increased their number of councillors in contested areas by 11%, while Labour’s has declined by 20%. 

The LibDems remain the third largest party but have seen little real change.

Other important highlights are that UKIP has now disappeared from the political scene and Reform has failed to hoover up its old seats.  The main beneficiary of the demise of UKIP has clearly been the Tories. 

There has also been a dramatic increase in the number of Green councillors (more than doubling to 151 in contested areas). 

One final important highlight is the fact that there have been gains across the board for a mix of independents (an 18% increase to 255 councillors).

Labour’s highest profile loss was, of course, Hartlepool.  However, here, the story has more to it than meets the eye.

Hartlepool

In Hartlepool the Tories saw their vote increase from 28.9% at the last general election to 51.9% on May 6th.   Much of this gain is likely the result of the disappearance of the Brexit Party as a meaningful political force.  25.8% voted BP in 2019 which, if added to the Tory vote at that time, would total 54.7% – similar to the Tory vote this time around.

Whilst this may explain the Tory win, it does not explain the reduction in the Labour vote (falling from 37.7% in the last general election to 28.7%).  Smaller parties like the Greens may have taken votes from Labour but as the Greens only accounted for 1.2% of the vote, this can hardly explain it.

One point to remember is that the incumbent MP was forced to leave office over allegations of sexual harassment and victimisation.  This may have served to turn some voters away from Labour – but whatever their reasons for not voting Labour, the question remains: who did those voters turn to?

A big factor appears to have been an independent candidate – Sam Lee, a local businesswoman.   Sam positioned herself as someone who stood up for the local business community and a Westminster outsider.  A vote for her, she claimed, would “show politicians that we are sick of their party games and empty promises”. A vote for her then, was, in many ways, a rejection of the status quo.  Sam polled 9.7% of the vote and, as she didn’t stand in 2019, it looks like she may have taken a fair number of votes away from Labour.

No change..?

So, in 2021, it may be that Hartlepool saw no real significant switch from Labour to Tory at all – that had already happened in 2019, when large numbers of voters changed to the Brexit Party.  And having switched to the BP, the move to voting Tory seems to have been an easy step for many. 

The vote for Sam Lee is significant though.  It shows a considerable number of people prepared to vote for someone outside the political establishment, and a desire amongst many for something quite different from the established parties.

The Red Wall weakens in the North and Midlands

In general, results in the North and Midlands have shown the biggest Tory gains and the most serious Labour losses.

Again, the explanation seems to lie mainly with picking up former Brexit Party voters rather than outright direct conversion of 2019 Labour voters. 

The biggest Tory gains compared with previous local elections were in Yorkshire and Humberside (+11.2%), the West Midlands (+9.7%) and the North East (+7.3%).

These marry up with the more significant Labour losses – Yorkshire and Humberside (-4.5%), the West Midlands (-5%) and the North East (-4%).

Labour losses and Tory gains were less significant elsewhere in England.

So, are we witnessing a sea-change in voting patterns in the North driven by regional factors or is it something more complicated than this? 

It is true that the so-called Red Wall has clearly been seriously eroded in many parts of the North.  However, Labour has performed well in certain large cities in the region.  Could it be that this is more about how voting patterns are changing in metropolitan v non-metropolitan areas than it is about changing attitudes in the North?

The Metropolitan Effect

Labour has performed well in northern metropolitan areas such as Liverpool and Manchester, showing that it can hold its own there under the right conditions.

In Manchester, Labour even gained ground.  Perhaps this was due in no small part to the charismatic Andy Burnham but the numbers tell a convincing tale.

Labour increased its share of the vote on the first choice for Mayor from 63.4% in 2017 to 67.3% in 2021. The Tories slipped from 22.7% to 19.6%.  Here, the lesser parties were very much out of the picture.

The Labour Mayoral vote also held strong in Liverpool.  No sign of any cracks in the Red Wall in these major northern cities – a quite different story from the one we see in less urban areas. 

So why is the metropolitan vote in the North so different from the trends we see elsewhere?

The Role of ‘Englishness’

Will Jennings, Professor of Political Science and Public Policy at Southampton University, feels that the migration of voters to the Brexit Party and then to the Tories has much to do with the emergence of a strong English national identity.  This tends to view the Tories as a party that is positive about the English and Labour as essentially lukewarm about, or even hostile to, an English cultural identity.

Evidence for this can be found in BSA research that looked at the motives behind voting Leave/Remain in the Brexit vote.  This found that people in England who identified themselves as ‘British’ and not ‘English’ voted 62% in favour of Remain.  However, 72% of people who identified themselves as ‘English’ and not ‘British’ voted in favour of Leave.

This sentiment, Jennings would argue, has translated into a vote for the Brexit Party in 2019 and has now converted into a Tory vote.  Parts of the North which have switched to Tory are often areas where this sense of ‘Englishness’ is strongest.

However, cities such as Manchester and Liverpool are more cosmopolitan in character and have strong and distinct local identities (as Mancunians or Scousers).  As a result, the tendency to strongly identify with an ‘English’ nationalist identity is less evident.  This in turn translates into a much-reduced willingness to switch away from Labour to the BP or Tories.

Treating the ‘North’ as a single homogenous area would therefore appear to be a gross over-simplification.

A different picture in Southern England

In the South, there was less dramatic change in voting patterns.  Although we saw some shift to the Tories in the council elections, the change was nowhere near as significant or dramatic as that seen in the North.

However, there are a couple of interesting results that are worth pulling out – both Labour Mayoral wins.

The first is the result for Cambridgeshire and Peterborough.   On first choices alone, the Tories would have won (Tory 41%, Labour 33%, LibDems 27%).  However, once the LibDems were knocked out of the picture, their voters’ second choices went overwhelmingly to Labour.  The result enabled Labour to win (just) with 51%. 
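The second-choice mechanics described above can be sketched as a simple two-round tally.  The first-choice shares are those quoted in the text; the second-preference split is a hypothetical illustration consistent with the outcome, not the official count.

```python
# Sketch of a supplementary-vote count: all but the top two candidates
# are eliminated, and second preferences from eliminated candidates'
# ballots are redistributed to the remaining two.

first_choice = {"Tory": 41, "Labour": 33, "LibDem": 27}  # % shares

# Top two candidates go through; the rest are eliminated.
top_two = sorted(first_choice, key=first_choice.get, reverse=True)[:2]
eliminated = [c for c in first_choice if c not in top_two]

# Hypothetical split of LibDem second preferences (the remainder
# express no valid second preference):
second_pref_split = {"Labour": 0.67, "Tory": 0.15}

totals = {c: float(first_choice[c]) for c in top_two}
for loser in eliminated:
    for candidate in top_two:
        totals[candidate] += first_choice[loser] * second_pref_split.get(candidate, 0)

winner = max(totals, key=totals.get)
print(f"Winner: {winner} with {totals[winner]:.0f}%")
```

The sketch shows how a candidate who trails on first choices can still win once transfers break heavily their way – which is exactly what happened here.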

The second result is for the West of England Mayor (which covers Bristol, Bath and North East Somerset and South Gloucestershire).

Here Labour increased its vote from 22.2% to 33.4% in the first round.  The Tories also actually did a little better (increasing from 27.3% to 28.6%).  The LibDems, again, saw limited but negative change (20.2% down to 16.3%) and the Greens again, saw progress (up to 21.7% from 11.2%). 

Again it is worth noting that the presence of a strong independent candidate can affect the results.  In 2017 such a candidate polled 15% of the vote but this time around, no such candidate stood.

This does raise the possibility that a future cooperative arrangement between Labour, the Greens and the LibDems in the south could cause significant damage to the Tories in some parts of the southern political landscape, however distant and unlikely such a prospect might seem today.

What about Scotland?

The results in Scotland, of course, have been quite different from anything we see in England.

Here we have seen the SNP make modest progress – increasing their share of the vote from 46.5% of constituency votes at the last parliamentary election in 2016 to 47.7% now.  The Tories saw little change in fortune (21.9% share now vs 22% in 2016).  Labour, too, saw limited change (21.6% down from 22.6%).

The SNP have consolidated and built on their dominant position even if they have not achieved an outright majority.  Some have suggested that they owe their electoral success at least in part to the general perception that Nicola Sturgeon has handled the Covid crisis well. 

One might make a similar observation across the UK: the light beckoning at the end of the Covid tunnel tends to favour incumbent administrations – the SNP in Scotland and the Tories in England.  There is no doubt some truth in this and, if so, we can see the pattern repeated in Wales.

What about Wales?

Wales bucked the pro-Tory trend we see in England.  Here comparisons with England are more interesting because Wales, like England, voted Leave (whereas Scotland did not).  However, UKIP and latterly the Brexit Party have never been quite the force in Wales that they were in many parts of England (the Brexit Party registered only 5% of the Welsh vote in the 2019 election). 

Here the Tories have not managed to benefit anywhere near so much from picking up former UKIP or Brexit Party voters.  In 2016 the Tories got 21.1% of the constituency vote, which they have been able to increase to 25.1% this time around.  This no doubt reflects picking up some of the old UKIP votes (which accounted for 12.5% of the votes in the 2016 assembly election).

However, in Wales Labour have increased their share of the vote from 34.7% to 39.9%. Plaid Cymru have remained at pretty much the same level (20.7% vs 20.5% last time).

As with elsewhere, it may well be that the incumbent administration is benefiting from the feeling that we are headed in the right direction Covid-wise. 

The lack of the BP/UKIP factor in Wales meant there were only a limited number of these voters for the Tories to pick up.  This supports Professor Jennings’ view that it is the sense of Englishness that has driven a migration of votes from Labour, via UKIP and the Brexit Party, to the Tories.  The absence of the ‘Englishness’ factor in Wales potentially explains why such a pattern has not been repeated here.

In conclusion

It is probably worth concluding by saying that we ought to be very careful in what we read into these results.  The 2021 elections have occurred at a time when so much is in a state of flux.  The Covid crisis makes these times most unusual indeed. 

In a few years’ time when (hopefully) Covid no longer dominates our lives, we will be living in a vastly different world.   Also, we cannot yet say what the longer-term impacts of Brexit may be.  We are also only at the very beginning of the Tory levelling-up agenda.  Much has been promised, but what will be delivered?

This election has highlighted some important emerging trends, but the events of the next few years could yet see things change quite radically.

About Synchronix

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future.  That means understanding change – whether that be changes in technology, culture, attitudes or behaviour. 

We offer market research services, opinion polling and content creation services.  You can read more about this on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

Sources

Election Results from BBC England

BBC Scottish Election Results

Welsh Election Results from the BBC

Sky News Election Takeaways

BSA

Hold the Front Page
