Paul Watts

Do lower taxes stimulate growth?

Can the simple act of lowering taxes stimulate growth?  We now know (thanks to Liz Truss) that, when unfunded, tax cuts can certainly trigger economic chaos.  But even if they are properly funded, the question remains: will they really foster growth?

There are many who would argue that they do.  Some even go so far as to present a policy of low taxation as a silver bullet – a golden ticket to growth and prosperity!

Bold claims indeed – but are they true?  What is the evidence?

The search for evidence

How would we test the veracity of this concept?  It is not uncommon to see people quote anecdotal examples to support the contention.  But highlighting a particular instance where reduced tax has been followed by positive growth in a single country is potentially no more than a propaganda exercise.  What about the big picture?

There are many countries in the world.  Collectively they represent a wide variety of different economies and different tax regimes.  In a great many cases we have access to a lot of historic data on growth, tax policy and so on.  Surely it cannot be beyond the wit of man to compare tax policy to outcomes across many countries over time.  Can it?

This is not as simple as it sounds but it is possible.  The main problem is to make sure we compare apples with apples.

What is growth?

Firstly, we need to agree some kind of sensible definition of what we mean by ‘growth’.  At a simple level we might look at growth in terms of GDP.  However, GDP just tells us the total monetary value of what an economy produces.  Growth in GDP is, of course, good.  However, it tells us very little about how wealthy the people who live in that country are.  That is because it takes no account of population size.

Think of it this way:

  • 100 people live on an island, each earning $5k per year.
  • Collectively, they earn $500k in a year.
  • Only one person lives on the neighbouring island.  He earns $100k a year.
  • Which island is wealthier?

One island generates five times the money of the other.  However, clearly the guy living alone on the second island is wealthier.  For this reason, it is often better to look at per capita GDP (GDP per head of population) as a more accurate measure of wealth. 

Measuring taxation

It might seem like a simple thing to compare taxation between one economy and another – until you actually try to do it.  It isn’t.

If you think about it, any given country has a wide variety of different taxes and tax rates.  One might have high sales taxes and low income taxes.  Some might have forms of taxes that few other countries have.  In some countries individuals might pay limited personal tax but companies pay a lot (or vice versa). 

Hence, what we need to do is look at the overall tax burden when all these different elements are bundled up – i.e., the proportion of the wealth generated that is taken in tax.

Oddities

In order to compare like with like we probably ought not to look at particularly odd situations.  Ireland is a case in point – so much so that Nobel Prize-winning economist Paul Krugman labelled the phenomenon ‘Leprechaun economics’.  So, what happened?

In 2015, Apple changed the way it reported its accounts.  It shuffled a large chunk of revenue, previously reported elsewhere, into Ireland.  Suddenly, Ireland’s GDP for 2015 jumped by 26%!  This had nothing to do with the underlying Irish economy and everything to do with an Apple accounting policy change.

This distortion makes it very misleading to look at the Irish economy in the same way as other economies.

Of course, the biggest ‘oddity’ of all in recent years has been covid.  The pandemic has had a massive effect on global economies since it first struck in early 2020.  Attempting to measure the impact of tax policy on growth in the period after 2020 would therefore be very difficult to say the least.

Snapshot blindness

We often look at growth quarter by quarter or year on year.  That is fine but it is nevertheless potentially just a snapshot.  It can be distorted by unusual events that are unique to a particular country or year.  This can present a picture in a particular period that is very different from the underlying trend.

To measure the impact of taxation on growth we really need to be thinking in terms of longer-term impacts – performance over a longer period of time than just a year.  The sun coming out on one day does not a summer make.

Base effect

One of the biggest problems in measuring growth between different economies is known as ‘base effect’. 

So, what is base effect? 

Imagine the following scenario:

  • Brian earns $10k per year in his very first job.
  • After his first year, he gets a pay rise and a promotion.  He now earns $15k per year.
  • Brian’s salary has grown by 50%!
  • His friend’s mother Anne earns $200k per year.
  • Anne also gets a pay rise at the end of the year.  Her salary increases to $240k.
  • Her pay has only gone up by 20% but Brian’s has gone up by 50%!
  • So … is Brian doing better than Anne?

Of course not. 

This is a classic example of base effect.  It is the main reason why developing / emerging economies grow very fast compared to developed economies.  They are starting from a very small base!

If we are going to meaningfully compare economic performance, we need to be mindful of distortions like base effect.  Unfortunately, virtually all economies are of different sizes.  Nevertheless, for a meaningful comparison we ideally need to compare economies at a similar stage of economic development.

Data limitations

The reality is that more economic information is available for some countries than others and some report slightly differently.  The OECD strives to record comparable metrics for its member states and for several other key countries as far as is possible.

However, the fact remains that reasonably comparable information on tax and growth is not always available. 

This naturally places limitations on the extent to which we can meaningfully compare different countries.

Designing a test to measure the impact of tax

So, how might we go about attempting to measure the impact of tax policy on growth?  That, of course, sounds like a good title for an economics PhD thesis, which is not what we can realistically attempt in a short blog. 

Nevertheless, it is possible to look at some high-level measures for a basket of countries to see if we can observe any patterns over time.  So, with this in mind, let’s define our parameters:

  1. Time period: 2010-2019.  This gives us 10 years of data to look at that should capture trends over a reasonably long period.  It also has the merit of being the most recent period we can pick before covid starts to distort growth figures.
  2. We’ll measure the overall tax burden in comparison with growth in per capita GDP.
  3. To avoid the worst impacts of Base Effects, we’ll focus on a basket of developed economies (defined as having a per capita GDP in excess of $30k in 2010 at 2015 prices).
  4. This would potentially include Ireland but, due to the distortions unique to that country already mentioned, we will exclude it.
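
As an illustration of how such a test might be assembled in practice, here is a minimal sketch in Python.  It assumes a hypothetical CSV file (oecd_tax_growth.csv) with one row per country, containing an average tax burden for 2010-2019 and per capita GDP figures for 2010 and 2019 at constant 2015 prices; the file name and column names are purely illustrative, not a real dataset.

```python
import pandas as pd

# Hypothetical input file: one row per country, with an average tax burden for
# 2010-2019 and per capita GDP (constant 2015 prices, USD) for 2010 and 2019.
df = pd.read_csv("oecd_tax_growth.csv")

# Parameter 3: keep developed economies only (per capita GDP above $30k in 2010)
df = df[df["gdp_pc_2010"] > 30_000]

# Parameter 4: exclude Ireland because of the 'Leprechaun economics' distortion
df = df[df["country"] != "Ireland"]

# Growth in per capita GDP over the whole 2010-2019 period, in percent
df["growth_pct"] = (df["gdp_pc_2019"] / df["gdp_pc_2010"] - 1) * 100

print(df[["country", "avg_tax_burden_pct", "growth_pct"]])
```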

What tools can we use to find a pattern?

Let’s say we have data for 25 countries over 10 years.  It tells us, in each case, the tax burden and the growth experienced.  How do we know if there is a pattern, i.e., that they are inter-related in any way?

We have a couple of tests we can apply in the initial instance:

  1. Correlation
  2. Regression analysis

These tests measure slightly different things.  You have probably heard of them and may have some familiarity with them.  However, perhaps you are one of the many people who are not entirely clear on the specific differences between the two.

What is correlation and how does it work?

Correlation is a statistical technique that measures the strength of the relationship between two sets of data.  It generates a number between -1 and +1 to indicate the strength of a relationship.  Technically we call this the correlation coefficient (or simply ‘r’).

A positive number indicates that a pattern exists and that, as one number increases, the other also tends to increase.

A negative number indicates the reverse – that one number declines as the other increases.

A number close to +1 or -1 indicates a very strong relationship.  A number close to 0 indicates a virtually non-existent relationship.

So, for example, we might compare the number of ice cream sales to the average temperature.  If we see a correlation of +0.8, this tells us that there is a strong relationship and that as temperature rises, ice cream sales also rise.
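
For readers who want to see the mechanics, the coefficient described above is the standard Pearson correlation and can be computed in a couple of lines.  The temperature and ice cream figures below are invented purely to illustrate the calculation.

```python
import numpy as np

# Invented figures purely to illustrate the calculation:
# average temperature (°C) and ice cream sales over seven periods
temperature = np.array([12, 15, 18, 21, 24, 27, 30])
ice_cream_sales = np.array([110, 140, 180, 210, 260, 300, 340])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r
r = np.corrcoef(temperature, ice_cream_sales)[0, 1]
print(f"r = {r:.2f}")  # close to +1, i.e. a strong positive relationship
```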

Correlation and causation

Correlation is not causation of course.  Correlation might tell us that ice cream sales rise when polar melt increases.  But that does not mean that buying ice cream causes the ice caps to melt!  In this example it just means that both are impacted by a third variable that we have failed to take into account – i.e. temperature.

In the case of what we’re trying to do here, a strong negative correlation between tax burden and growth would indicate a strong relationship between a high tax burden and poor growth.

As a general guide to the strength of a relationship we’d typically consider:

Correlation coefficient    Strength of the relationship
1 or -1                    Perfect!
0.7 or -0.7                Strong
0.5 or -0.5                Moderate
0.3 or -0.3                Weak
0                          None

What is regression analysis and how does it work?

Correlation seeks to measure the strength of a relationship but no more.  Regression analysis goes one step further.  It seeks to build a model to predict how one factor might change as a result of a change in another.

So, in this case, a regression analysis would seek to predict how an increase or decrease in the tax burden might impact growth.

Typically, regression produces two things.  The first is a formula that you can use to predict an outcome.  Hence, you can use a regression formula to tell you what growth you might expect if you set the tax burden at a particular level.

The second key output from a regression analysis is a measure of how reliably this formula can predict an outcome.  The technical term for this measure is ‘R²’ (not to be confused with the ‘r’ we use in correlation).  Confusingly, it is also a number on a fixed scale – in this case between 0 and 1 – but it means something quite different.

If the number is 1, the equation fits the data perfectly.  However, if the number is zero, the equation is pretty much junk.  So R² simply tells us how well we can predict growth rates based on the tax burden.
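
To make the distinction concrete, here is a minimal sketch of a simple linear regression using scipy, with invented tax burden and growth figures.  It produces the two outputs described above: a prediction formula (slope and intercept) and R², the measure of how well that formula fits the data.

```python
import numpy as np
from scipy import stats

# Invented figures purely for illustration: overall tax burden (% of GDP)
# and growth in per capita GDP over ten years (%)
tax_burden = np.array([25, 28, 31, 34, 37, 40, 43, 46])
growth = np.array([18, 22, 12, 15, 9, 14, 6, 10])

result = stats.linregress(tax_burden, growth)

# Output 1: the prediction formula (expected growth for a given tax burden)
print(f"growth ≈ {result.intercept:.1f} + {result.slope:.2f} × tax burden")

# Output 2: R², a measure of how reliably that formula predicts the outcome
print(f"R² = {result.rvalue ** 2:.2f}")
```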

Comparing taxation levels with growth

In this analysis we compared a total of 25 OECD countries over the period 2010-2019.  We looked at the average overall tax burden in each case over the period and compared it to growth in GDP per capita over the same period.  Ireland and countries with a GDP per capita under $30k were excluded.

The countries were:  Australia, Austria, Belgium, Canada, Czech Republic, Denmark, Finland, France, Germany, Iceland, Israel, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Portugal, Slovenia, South Korea, Spain, Sweden, Switzerland, UK, USA.

The results from this analysis are shown in the scatter plot below:

So, do high levels of tax impede growth?

The correlation between tax burden and growth rates experienced is -0.4.  This indicates there is a relationship between the two.  High tax levels do tend to correlate with weaker growth.  However, the relationship is moderate to weak.  We can see this clearly from the plot.  There are countries with a tax burden above 40% that experienced stronger growth than some with a tax burden under 30%.

In terms of regression, it is possible to plot a trend line that indicates how taxation in general might impact growth.  It suggests, for example, that an economy with a tax burden of around 25% might expect growth of around 15% over ten years, while one with a tax burden of around 45% might expect growth of only around half this level.  However, this model is a terrible fit, with an R² of only 0.16!  We only need to compare the scatter plot to the trend line to see there are numerous exceptions to the rule.  There are plenty of examples of countries experiencing half or double the predicted growth at various tax levels.
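
Purely as an illustration of how to read such a trend line, the sketch below back-calculates a straight line from the two figures quoted above (roughly 15% growth at a 25% tax burden and about half that at 45%).  The coefficients are implied by those two points, not taken from the underlying dataset.

```python
# Two illustrative points implied by the text:
# ~15% growth at a 25% tax burden, ~7.5% growth at a 45% tax burden
slope = (7.5 - 15) / (45 - 25)   # about -0.38 points of growth per point of tax burden
intercept = 15 - slope * 25      # about 24.4

def predicted_growth(tax_burden_pct: float) -> float:
    """Ten-year growth in per capita GDP implied by the illustrative trend line."""
    return intercept + slope * tax_burden_pct

for tax in (25, 30, 35, 40, 45):
    print(f"tax burden {tax}% -> predicted growth {predicted_growth(tax):.1f}%")
```

With an R² of just 0.16, of course, individual countries routinely land well above or below these predictions.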

Conclusion

All this suggests that the tax burden has only a limited depressing effect on growth.  So an obsession with lowering taxes as a panacea for delivering high growth is clearly naïve.  Tax is just one of several factors that need to be considered and, quite possibly, not even the most important.

Strong growth is clearly possible in countries with high levels of tax.  By the same token, having low tax rates does not guarantee strong growth by any means.

A more balanced view might be simply to say that tax has a limited depressive impact on growth.  For this reason, we could argue that it is better to keep it lower than higher.  But, by the same token, increasing the overall tax burden by 3% or even 4% would not necessarily depress growth.  Growth might even be stimulated if the additional revenues raised were invested wisely.

The idea that lowering tax is a silver bullet for stimulating growth is therefore unsupportable.

About Synchronix

Synchronix is a full-service market research consultancy.  We work with data to generate market insight to drive evidence-based decision making.

We specialise in serving b2b and niche consumer markets in Science, Engineering, Technology, Construction, Distribution, and Agriculture. Understanding and dealing with technical and industrial markets is our forte and this enables us to draw a greater depth of insight than non-specialist agencies.

Take a look at our website to find out more about our services.

Understanding Customer Experience

Customer experience surveys are widely used in business today and are generally regarded as a great way to get invaluable feedback from customers.

But whether you’ve been running a customer survey for years, or whether you’ve never run one, it’s worth reflecting on the benefits they can offer.  And it’s also worth considering how to go about getting the most from them.

Why run a customer experience survey?

Quite simply, happy customers are loyal customers, and loyal customers deliver more recommendations and more repeat business.

A well-designed customer satisfaction survey will deliver several important business benefits:

  1. By monitoring customer opinion, you’ll have early warning of any potential problems that might cause you to lose revenue further down the line.
  2. It will tell you which improvements will do the most to boost customer loyalty and sales.
  3. It can also tell you what improvements/changes are unlikely to deliver much value.
  4. In short, it prioritises what you need to do to nurture a loyal customer base.

So, when a customer survey is well designed and used effectively, it serves as a positive vehicle for tangible business improvements.

However, it is possible to get it wrong.  And if this happens you might end up with something that delivers limited actionable insight.

So, how do you ensure your customer survey falls into the former category rather than the latter? Here’s a list of pointers I’ve pulled together, based on over 30 years’ experience of designing and running such programs:

Make sure you set the right objectives to start with

Let’s start at the very beginning by looking at the overall business objectives for a customer survey.  Get that wrong and you’ll be lucky to get any real value from the exercise.

The most common reason why such surveys can fail is when they’ve been designed as a box ticking exercise from the very start. If a survey is just used to provide internal reassurance that all is well, it isn’t ever going to serve as an effective agent for change.

Fortunately, this kind of problem is rare.  A more common issue is that sometimes these surveys can be used exclusively in a limited, tactical, way. Here performance scores for each area of the business might be used to directly inform such things as bonuses and performance reviews.  That’s all fine but if this is the only tangible way in which the survey is used, it’s a missed opportunity.

Don’t get me wrong, there’s a value in using surveys to tactically monitor business performance.  But their true value lies in using them to guide business improvement at a strategic level.  If we lose sight of this goal, we won’t ever get the most out of such surveys.

Takeaway:  The primary goal of a customer experience survey should move beyond monitoring performance to providing direction for business improvement.

Using standard templates is just a starting point

Moving on to how we go about designing a survey, the next trap people can fall into is taking the easy option.  By that, I mean running a generic customer survey based on standard templates.

It is easy enough to find standard templates for customer experience surveys online.  Most DIY survey tools will provide them as part of the service.  Standard questions for such things as NPS and generic performance scoring are readily available.

But standard templates are, as the name implies, standard.  They are designed to serve as a handy starting point – not an end in themselves.

There is nothing in any of them that will be unique or specific to any business.  As a result, if you rely purely on a standard template, you’ll only get generic feedback.

That might be helpful up to a point, but to receive specific, actionable, insight from a survey you need to tailor it to collect specific, actionable, feedback from your customers.  And that means you need to ask questions about issues that are specific to your business, not any business.

Takeaway:  Only ever use standard templates as a starting point.  Always tailor customer experience surveys to the specific needs of your business.

Avoid vague measures, focus on actionable ones

It may sound obvious, but it’s important to make sure you are measuring aspects of business performance that are clearly defined and meaningful.  That means each measure needs to be specific, so there is no confusion over what it might or might not mean when you come to look at the results.

Leaving these definitions too broad or too generic can make it very hard to interpret the feedback you get.

Let’s take an example – ‘quality’.  What exactly does that mean?  It might mean slightly different things in different industries.  And it might mean different things to different people, even within the same organisation.

If your product is machinery, product quality could refer to reliability and its ability to run with minimal downtime.  However, it might also relate to the quality of work the machine produces.  Or perhaps, under certain circumstances, it might refer more to accuracy and precision?  When you think about it, ‘quality’ could encompass a range of different things.

To avoid potentially confusing outcomes of this sort you need to use more specific phrasing.  That way, when you identify an area that needs improvement, it’s clear what needs to be done.

Takeaway:  Ensure you’re testing specific measures of business performance. 

Always provide a mechanism for open feedback

Not everyone will answer open-ended questions by any means.  Indeed, surveys can fall into the trap of asking too many, leading to a poor response.

However, one or two well targeted open questions will provide invaluable feedback.  It is a golden opportunity to pick up on issues and opportunities for improvement that you haven’t thought of, but which your customers have!

Takeaway: Always include one or two well targeted open questions to elicit feedback from customers.  But don’t add too many or response rates will suffer, and the quality of answers will be diluted.

Ensuring insight is actionable

Of course, you might already have a customer experience survey.  Perhaps it has been running for years.  If it is delivering good value then happy days.  However, that’s not always the case.

Sometimes people find that the outputs from an existing customer experience survey are not particularly actionable.  If that is the case, then it’s a clear warning sign you’re doing something wrong.

There are only two reasons why this ever happens:

1st reason:   Senior management in the business are incapable of driving positive change, even if they are provided with clear direction as to what they should be doing.

2nd reason:  The survey was poorly designed in the first place and is unlikely to ever deliver anything actionable.

Unfortunately, the first of these problems can’t be solved by a survey or any other form of market insight come to that!  But it is possible to do something about the latter!

The answer is simple – you need to redesign your customer experience survey.  Don’t keep re-running it and repeating the same old mistakes.

Takeaway: If your customer experience survey is not delivering actionable insight, stop running it.  You need to either re-design it or save your money and not bother!

Legacy questions and survey bloat

Has your customer survey been running for several years now?  Does the following pattern sound familiar?

  • Every year, the previous year’s questionnaire gets circulated to survey stakeholders for feedback.
  • Each stakeholder comes back with feedback that involves adding new questions, but they don’t often suggest taking any of the old questions away.
  • Some of the new questions (perhaps all) relate to some very specific departmental initiatives.
  • The questionnaire gets longer.
  • The response rate goes down as a result.
  • A year goes by and it may not be entirely clear what has been done with the outputs of some of these questions.
  • The process repeats itself….

Of course, there is a benefit in maintaining consistency.  However, there’s little point measuring things that are no longer relevant for the business.

It may well be time for a more fundamental review. 

Maybe even consider going back to square one and running some qualitative research with customers. Could you be missing something vitally important that a few open conversations with customers could reveal?

Alternatively, maybe you need to run some internal workshops.  How well do current priorities really align with legacy questions in the survey?

Takeaway: If you think your customer survey has become overly bloated with legacy questions, don’t shy away from carrying out a full review.  

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing services & content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

Just how accurate are Opinion Polls?

Just after the Brexit referendum result became known, in late June 2016, several newspapers ran stories on how the opinion polls “got it wrong”.

Typical of these was an article in the Guardian from 24th June 2016, with the headline “How the pollsters got it wrong on the EU referendum.”  In it the journalist observed:

“Of 168 polls carried out since the EU referendum wording was decided last September, fewer than a third (55 in all) predicted a leave vote.”

Of course, this is neither the first nor the last time pollsters have come in for some criticism from the media.  (Not that it seems to stop journalists writing articles about opinion polls of course).

But sensationalism aside, how accurate are polls?  In this article, I’ll explore how close (or far away) the polls came to predicting the Brexit result.  And what lessons we might draw from this for the future.

The Brexit Result

On 23rd June 2016, the UK voted by a very narrow margin (51.9% to 48.1%) in favour of Brexit.  However, if we just look at polls conducted near to the referendum, the general pattern was to predict a narrow result.  In that respect the polls were accurate. 

Taking an average of all these polls, the pattern for June showed an almost 50/50 split, with a slight edge in favour of the leave vote.  So, polls taken near to the referendum predicted a narrow result (which it was) and, if averaged, just about predicted a leave result (which happened).

Chart of Brexit vote vs. result polls

To compare the predictions with the results, I’ve excluded people who were ‘undecided’ at the time of the surveys, since anyone still ‘undecided’ on the day would presumably not have voted at all.

Of course, the polls did not get it spot on.  But that is because we are dealing with samples.  Samples always have a margin of error, so cannot be expected to be spot on.

Margin of error

The average sample size of the polls run during this period was 1,799 (some had sample sizes as low as 800; others, several thousand).  On a sample size of 1,799, a 50/50 result would have a margin of error of +/- 2.3%.  That means if such a poll predicted 50% of people were going to vote leave, we could be 95% confident that between 48% and 52% would vote leave.

In the end, the average of all these polls came to within 1.7% of predicting the exact result.  That’s close!  It’s certainly within the margin we’d expect.

You might wonder why polls don’t use bigger samples to improve the margin of error.  If a result looks like being close, you’d think it might be worth using a large enough sample to reduce it.

Why not, for example, use a sample big enough to reduce the statistical error margin to 0.2% – a level that would provide a very accurate prediction?  To achieve that you’d need a sample of around 240,000!  That’s a survey costing a whopping 133 times more than the typical poll!  And that’s a cost people who commission polls would be unwilling to bear.
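
For anyone who wants to check the arithmetic, the short sketch below uses the standard margin-of-error formula for a proportion at 95% confidence.  It reproduces both the +/- 2.3% figure for a sample of 1,799 and the sample of roughly 240,000 needed to get the margin down to 0.2%.

```python
import math

Z_95 = 1.96  # z-score for 95% confidence
P = 0.5      # assume a 50/50 split, the worst case for the error margin

def margin_of_error(sample_size: int) -> float:
    """Margin of error (as a proportion) for a 50/50 result at 95% confidence."""
    return Z_95 * math.sqrt(P * (1 - P) / sample_size)

def required_sample(margin: float) -> int:
    """Sample size needed to achieve a given margin of error."""
    return math.ceil((Z_95 / margin) ** 2 * P * (1 - P))

print(f"{margin_of_error(1_799):.1%}")  # about 2.3%
print(required_sample(0.002))           # about 240,000
```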

Data Collection

Not all polls are conducted in the same way, however. Different pollsters have different views as to the best ways to sample and weight their data.  Most of these differences are minor and all reflect the pollster’s experience of what approaches have delivered the most accurate results in the past.  Taking a basket of several polls together would create a prediction more likely to iron out any outliers or odd results resulting from such differences.

However, there is one respect in which polls fall into two potentially very different camps when it comes to methodology.  Some are conducted online, using self-completed surveys, where the sample is drawn from online consumer panels.  Others are conducted by telephone, using randomly selected telephone sample lists.

Both have their potential advantages and disadvantages:

  • Online: not everyone is online and not everyone is easy to contact online.  In particular, older people may use the internet less often.  So, any online sample will under-represent people with limited internet access.
  • Telephone: not everyone is accessible by phone.  Many of these sample lists may be better at reaching people with landlines than mobiles.  That might make it difficult to access some younger people who have no landline, or people registered with the Telephone Preference Service.

But, that said, do these potential gaps make any difference?

Online vs Telephone Polling

So, returning to the Brexit result, is there any evidence to suggest either methodology provides a more accurate result?

Chart of Brexit result vs. online and telephone polls

A simple comparison between the results predicted by the online polls vs the telephone polls conducted immediately prior to the referendum reveals the following:

  • Telephone polls: Overall, the average for these polls predicted a 51% majority in favour of remain.
  • Online polls: Overall, the average for these polls predicted a win for the leave vote by 50.5% (in fact it was 51.9%)

On the surface of things, the online polls appear to provide the more accurate prediction.  However, it’s not quite that simple.

Online polls are cheaper to conduct than telephone polls.  As a result, online polls can often afford to use larger samples.  This reduces the level of statistical error.  In the run up to the referendum the average online poll used a sample of 2,406 vs. an average of 1,038 for telephone polls.

The greater accuracy of the online polls over this period could therefore be largely explained simply by the fact that they were able to use larger samples.  As telephone is a more expensive medium, it is undeniably easier to achieve a larger sample via the online route.
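
Plugging those two average sample sizes into the same margin-of-error formula used earlier shows the gap in statistical precision – a rough sketch, again assuming a near 50/50 split.

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Margin of error (as a proportion) for a 50/50 result at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"online (n=2,406):    +/- {margin_of_error(2_406):.1%}")  # about 2.0%
print(f"telephone (n=1,038): +/- {margin_of_error(1_038):.1%}")  # about 3.0%
```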

Accuracy over time

You might expect that, as people get nearer to the time of an election, they are more likely to come to a decision as to how they will vote.

However, our basket of polls in the month leading up to the Brexit vote shows no sign that the proportion of ‘undecided’ voters was changing.  During the early part of the month around 10% consistently stated they had not decided.  Closer to the referendum, this number remained much the same.

However, when we look at polls conducted in early June vs polls conducted later, we see an interesting contrast.  As it turns out, polls conducted early in June predicted a result closer to the actual result than those conducted closer to the referendum.

In fact, it seems that polls detected a shift in opinion that seems to have occurred around the time of the assassination of the MP, Jo Cox.

Chart of Brexit result vs. polls taken before and after the killing of Jo Cox

Clearly, the average for the early-month polls predicts a result very close to the final one.  The basket of later polls, however, despite the advantage of larger samples, is off the mark by a significant margin.  It is these later polls that reinforced the impression in some people’s minds that the country was likely to vote Remain.

But why?

Reasons for mis-prediction

Of course, it is difficult to explain why surveys seemed to show a result that was a little way off the final numbers so close to the event. 

If we look at opinion surveys conducted several months before the referendum, then differences become easier to explain.  People change their minds over time and other people who are wavering will make up their minds.

A referendum conducted in January 2016 would have delivered a slightly different result to the one in June 2016 – partly because a slightly different mix of people would have voted, and partly because some people would have held a different opinion in January to the one they held in June.

However, by June 2016, you’d expect that a great many people would have made up their minds.

Logically, however, there are four reasons I can think of as to why there might be a mis-prediction by polls conducted during this period:

  1. Explainable statistical error margins.
  2. Unrepresentative approaches.
  3. Expressed intentions did not match actual behaviour.
  4. “Opinion Magnification”.

Explainable statistical error margins

Given the close nature of the vote, this is certainly a factor.  Polls of the size typically used here would find it very difficult to precisely predict a near 50/50 split. 

51.9% voted Leave.  A poll of 2000 could easily have predicted 49.7% (a narrow reverse result) and still be within an acceptable statistical error margin. 

18 of the 31 polls (58%) conducted in June 2016 returned results within the expected margin of statistical error vs the final result.  Where these polls called the overall result wrong (as 3 of them did), this can be explained purely by the fact that the sample size was not big enough.

However, this means that 13 returned results that can’t be accounted for by expected statistical error alone. 

If we look at surveys conducted in early June, 6 returned results outside the expected bounds of statistical variance.  However, this was usually not significantly outside those bounds (just 0.27% on average). 

The same cannot be said of surveys conducted later in June.  Here polls were getting the prediction wrong by an average of 1.28% beyond the expected range.  All the surveys (7 in total) that predicted a result outside of the expected statistical range consistently predicted a Remain win.

This is too much of a coincidence.  Something other than simple statistical error must have been at play.

Unrepresentative approaches

Not everyone is willing (or able) to answer Opinion Polls. 

Sometimes a sample will contain biases.  People without landlines would be harder to reach for a telephone survey.  People who never or rarely go online will be less likely to complete online surveys.

These days many pollsters make a point of promising a ‘quick turnaround’.  Some will boast that they can complete a poll of 2,000 interviews online in a single day.  That kind of turnaround is great news for a fast-paced media world but will under-represent infrequent internet users.

ONS figures for 2016 showed that regular internet use was virtually universal amongst the under 55s.  However, as of June 2016, 12% of 55–64-year-olds, 26.9% of 65–74-year-olds and 61.3% of the over 75s had not used the internet in the previous three months. Older people were more likely to vote Leave.  But were the older people who don’t go online more likely to have voted Leave than those who do?

It is hard to measure the effect of such biases.  Was there anything about those who could not / would not answer a survey that means they would have answered differently?  Do they hold different opinions?

However, such biases won’t explain why the surveys of early June proved far better at predicting the result than those undertaken closer to the vote. 

Expressed intention does not match behaviour

Sometimes, what people do and what they say are two different things.  This probably doesn’t apply to most people.  However, we all know there are a few people who are unreliable.  They say they will do one thing and then go ahead and do the opposite.

Also, it is only human to change your mind.  Someone who planned to vote Remain in April, might have voted Leave on the day.  Someone undecided in early June, may have voted Leave on the day.  And some would switch in the other direction.

Without being able to link a survey answer to an actual vote, there is no way to test the extent to which people’s stated intentions fail to match their actual behaviour.

However, again, this kind of switching does not adequately explain the odd phenomenon we see in June polling.  How likely is it that people who planned to vote Leave in early June, switched to Remain later in the month and then switched back to Leave at the very last minute?  A few people maybe, but to explain the pattern we see, it would have to have been something like 400,000 people.  That seems very unlikely.

The Assassination of Jo Cox

This brings us back to the key event on 16 June – the assassination of Jo Cox.  Jo was a Labour politician who strongly supported the Remain campaign and was a well-known champion of ethnic diversity.  Her assassin was a right-wing extremist who held virulently anti-immigration views.

A significant proportion of Leave campaigners cited better immigration control as a key benefit of leaving the EU.  Jo’s assassin was drawn from the most extremist fringe of such politics.

The boost in the Remain vote recorded in the polls that followed her death was attributed at the time to a backlash against the assassination – the idea being that some people, shocked by the implications of the incident, were persuaded to vote Remain.  Doing so might be seen by some as an active rejection of the kind of extreme right-wing politics espoused by Jo’s murderer.

At the time it seemed a logical explanation.  But as we now know, it turned out not to be the case on the day.

Reluctant Advocates

There will be some people who will, by natural inclination, keep their voting intentions secret. 

Such people are rarely willing to express their views in polls, on social media, or even in conversation with friends and family.  In effect they are Reluctant Advocates.  They might support a cause but are unwilling to speak out in favour of it.  They simply don’t like drawing attention to themselves.

There is no reason to suspect that this relatively small minority would necessarily be skewed any more or less to Leave or Remain than everyone else.  So, in the final analysis, it is likely that the Leave and Remain voters among them will cancel each other out. 

The characteristic they share is a reluctance to make their views public.  However, the views they hold beyond this are not necessarily any different from most of the population.

An incident such as the assassination of Jo Cox can have one of two effects on public opinion (indeed it can have both):

  • It can prompt a shift in public opinion which, given the result, we now know did not happen.
  • It can prompt Reluctant Advocates to become vocal, resulting in a phenomenon we might call Opinion Magnification.

Opinion Magnification

Opinion Magnification creates the illusion that public opinion has changed or shifted to a greater extent than it actually has.  This will not only be detected in Opinion Polls but also in social media chatter – indeed via any media through which opinion can be voiced.

The theory is that the assassination of Jo Cox shocked Remain-supporting Reluctant Advocates into becoming more vocal.  By contrast, it would have had the opposite effect on Leave-supporting Reluctant Advocates.

The vast majority of Leave voters would clearly not have held the kind of extremist views espoused by Jo’s assassin.  Indeed, most would have been shocked and would naturally have tried to distance themselves from the views of the assassin as much as possible.  This fuelled the instinct of Leave-voting Reluctant Advocates to keep a low profile and discouraged them from sharing their views.

If this theory is correct, this would explain the slight uplift in the apparent Remain vote in the polls.  This artificial uplift, or magnification, of Remain supporting opinion would not have occurred were it not for the trigger event of 16 June 2016.

Of course, it is very difficult to prove that this is what actually occurred.  However, it does appear to be the only explanation that fits the pattern we see in the polls during June 2016.

Conclusions

Given the close result of the 2016 referendum, it was always going to be a tough prediction for pollsters.  Most polls will only be accurate to around +/- 2% anyway, so it was a knife-edge call from the start.

However, in this case, in the days leading up to the vote, the polls were not just out by around 2% in a few cases.  They were out by around 3% on average, predicting a result that was the reverse of the actual outcome.

Neither statistical error, potential biases nor any disconnect between stated and actual voting behaviour can adequately account for the pattern we saw in the polls. 

A more credible explanation is distortion by Opinion Magnification prompted by an extraordinary event.  However, as the polling average shifted no more than 2-3%, the potential impact of this phenomenon appears to be quite limited.  Indeed, in a less closely contested vote, it would probably not have mattered at all.

Importantly, all this does not mean that polls should be junked.  But it does mean that they should not be viewed as gospel.  It also means that pollsters and journalists need to be alert for future Opinion Magnification events when interpreting polling results.

About Us

Synchronix Research offers a full range of market research services, polling services and market research training.  We can also provide technical content writing services & content writing services in relation to survey reporting and thought leadership.

For any questions or enquiries, please email us: info@synchronixresearch.com

You can read more about us on our website.  

You can catch up with our past blog articles here.

Sources, references & further reading:

How the pollsters got it wrong on the EU referendum, Guardian 24 June 2016

ONS data on internet users in the UK

Polling results from Opinion Polls conducted prior to the referendum as collated on Wikipedia

FiveThirtyEight – for Nate Silver’s views on polling accuracy

Working with Digital Data Part 2 – Observational data

One of the most important changes brought about by the digital age is the availability of observational data.  By this I mean data that relates to an observation of actual online consumer behaviour.  A good example would be in tracing the journey a customer takes when buying a product.

Of course, we can also find a lot of online data relating to attitudes and opinions but that is less revolutionary.  Market Research has been able to provide a wealth of that kind of data, more reliably, for decades.

Observational data is different – it tells us about what people actually do, not what they think (or what they think they do).  This kind of behavioural information was historically very difficult to get at any kind of scale without spending a fortune.  Not so now.

In my earlier piece I had a look at attitudinal and sentiment related digital data.  In this piece I want to focus on observational behavioural data, exploring its power and its limitations.

Memory vs reality

I remember, back in the 90s and early 2000s, it was not uncommon to be asked to design market research surveys aimed at measuring actual behaviour (as opposed to attitudes and opinions). 

Such surveys might aim to establish things like how much people were spending on clothes in a week, or how many times they visited a particular type of retail outlet in a month, etc.  This kind of research was problematic.  The problem lay with people’s memories.  Some people can recall their past behaviour with exceptional accuracy.  However, others literally can’t remember what they did yesterday, let alone recall their shopping habits over the past week.

The resulting data only ever gave an approximate view of what was happening BUT it was certainly better than nothing.  And, for a long time, ‘nothing’ was usually the only alternative.

But now observational data, collected in our brave new digital world, goes some way to solving this old problem (at least in relation to the online world).  We can now know for sure the data we’re looking at reflects actual real-world consumer behaviour, uncorrupted by poor memory.

Silver Bullets

Alas, we humans are indeed a predictable lot.  New technology often comes to be regarded as a silver bullet.  Having access to a wealth of digital data is great – but we still should not automatically expect it to provide us with all the answers.

Observational data represents real behaviour, so that’s a good starting point.  However, even this can be misinterpreted.  It can also be flawed, incomplete or even misleading.

There are several pitfalls we ought to be mindful of when using observational data.  If we keep these in mind, we can avoid jumping to incorrect conclusions.  And, of course, if we avoid drawing incorrect conclusions, we avoid making poor decisions.

Correlation in data is not causation

It may be an old adage in statistics, but it is even more relevant today than ever before.  For my money, Nate Silver hit the nail on the head:

“Ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there is no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Häagen-Dazs.”

[Nate Silver]

Finding a relationship in data is exciting.  It promises insight.  But, before jumping to conclusions, it is worth taking a step back and asking if the relationship we found could be explained by other factors.  Perhaps something we have not measured may turn out to be the key driver.

Seasonality is a good example.  Did our sales of Christmas decorations go up because of our seasonal ad-campaign or because of the time of year?  If our products are impacted by seasonality, then our sales will go up at peak season but so will those of our competitors.  So perhaps we need to look at how market share has changed, rather than basic sales numbers, to see the real impact of our ad campaign.
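
As a minimal sketch of that market-share check (with invented November and December figures), raw sales can jump sharply at peak season while market share barely moves – which is the signal that the uplift is seasonal rather than driven by the campaign.

```python
# Invented monthly figures purely for illustration
our_sales = {"Nov": 1_000, "Dec": 1_800}
total_market_sales = {"Nov": 10_000, "Dec": 18_500}

for month in ("Nov", "Dec"):
    share = our_sales[month] / total_market_sales[month]
    print(f"{month}: sales {our_sales[month]}, market share {share:.1%}")

# Our sales grew 80% month on month, but market share slipped from 10.0% to ~9.7%:
# the uplift came from the season, not from the ad campaign.
```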

Unrepresentative Data

Early work with HRT seemed to suggest that women on HRT were less susceptible to heart disease than other women.  This was based on a large amount of observed data.  Some theorised that HRT treatments might help prevent heart disease. 

The data was real enough.  Women who were on HRT did experience less heart disease than other women.

But the conclusion was utterly wrong.

The problem was that, in the early years of HRT, women who accessed the treatment were not representative of all women. 

As it turned out they were significantly wealthier than average.  Wealthier women tend to have access to better healthcare, eat healthier diets and are less likely to be obese.  Factors such as these explained their reduced levels of heart disease, not the fact that they were on HRT.

Whilst the completeness of digital data sets is improving all the time, we still often find ourselves working with incomplete data.  When that is the case, it is always prudent to ask: is there anything we’re missing that might explain the patterns we are seeing?

Online vs Offline

Naturally, digital data is a measure of life in the online world.  For some brands this will give full visibility of their market, since all, or almost all, of their customers primarily engage with them online.

However, some brands have a complex mix of online and offline interactions with customers.  As such it is often the case that far more data exists in relation to online behaviour than to offline.  The danger is that offline behaviour is ignored or misunderstood because too much is being inferred from data collected online.

This carries a real risk of data myopia, leading to us becoming dangerously over-reliant on insights gleaned from an essentially unrepresentative data set. 

Inferring influence from association

Put simply – do our peers influence our behaviour?  Or do we select our peers because their behaviour matches ours?

Anna goes to the gym regularly and so do most of her friends.  Let’s assume both statements are based on valid observation of their behaviour.

Given such a pattern of behaviour it might be tempting to conclude that Anna is being influenced by ‘herd mentality’. 

But is she? 

Perhaps she chose her friends because they shared similar interests in the first place, such as going to the gym? 

Perhaps they are her friends because she met them at the gym?

To identify the actual influence, we need to understand the full context.  Just because we can observe a certain pattern of behaviour does not necessarily tell us why that pattern exists.  And if we don’t understand why a certain pattern of behaviour exists, we cannot accurately predict how it might change.

Learning from past experiences

Observational data measures past behaviour.  This includes very recent past behaviour of course (which is part of what makes it so useful).  Whilst this is a useful predictor of future behaviour, especially in the short term, it is not guaranteed.  Indeed, in some situations, it might be next to useless. 

But why?

The fact is that people (and therefore markets) learn from their past behaviour.  If past behaviour leads to an undesirable outcome they will likely behave differently when confronted with a similar situation in future.  They will only repeat past behaviour if the outcome was perceived to be beneficial.

It is therefore useful to consider the outcomes of past behaviour in this light.  If you can be reasonably sure that you are delivering high customer satisfaction, then it is less likely that behaviour will change in future.  However, if satisfaction is poor, then there is every reason to expect that past behaviour is unlikely to be repeated. 

If I know I’m being watched…

How data is collected can be an important consideration.  People are increasingly aware their data is being collected and used for marketing purposes.  The awareness of ‘being watched’ in this way can influence future behaviour.  Some people will respond differently and take more steps than others to hide their data.

Whose data is being hidden?  Who is modifying their behaviour to mitigate privacy concerns?  Who is using proxy servers?  These questions will become increasingly pressing as the use of data collected digitally continues to evolve.  Will a technically savvy group of consumers emerge who increasingly mask their online behaviour?  And how significant will this group become?  And how different will their behaviour be to that of the wider online community?

This could create issues with representativeness in the data sets we are collecting.  It may even lead to groups of consumers avoiding engagement with brands that they feel are too intrusive.  Could our thirst for data, in and of itself, put some customers off?  In certain circumstances – certainly yes.  This is already happening.  I certainly avoid interacting with websites with too many ads popping up all over the place.  If a large ad pops up at the top of the screen, obscuring nearly half the page, I click away from the site immediately.  Life is way too short to put up with that annoying nonsense.

Understanding why

By observing behaviour, we can see, often very precisely, what is happening.  However, we can only seek to deduce why it is happening from what we can see. 

We might know that person X saw digital advert Y on site Z and clicked through to our website and bought our product.  Those are facts. 

But why did that happen?

Perhaps the advert was directly responsible for the sale.  Or perhaps person B recommended your product to person X in the bar, the night before.  Person X then sees your ad the next day and clicks on it.  However, the truth is that the ad only played a secondary role in selling the product – an offline recommendation was key.  Unfortunately, the key interaction occurred offline, so it remained unobserved.

Sometimes the only way to find out why someone behaved in a certain way is to ask them.

Predicting the future

Forecasting the future for existing products using observational data is a sound approach, especially when looking at the short-term future.

Where it can become more problematic is when looking at the longer term.  Market conditions may change, competitors can launch new offerings, fashions shift etc.  And, if we are looking to launch a new product or introduce a new service, we won’t have any data (in the initial instance) that we can use to make any solid predictions.

The question we are effectively asking is about how people will behave and has little to do with how they are behaving today.  If we are looking at a truly ground-breaking new concept then information on past behaviour, however complete and accurate, might well be of little use.

So, in some circumstances, the most accurate way to discover likely future behaviour is to ask people.  What we are trying to do is to understand attitudes, opinions, and preferences as they pertain to an (as yet) hypothetical future scenario.

False starts in data

One problematic area for digital marketing (or indeed all marketing) campaigns is false starts.  AI tools are improving in their sophistication all the time.  However, they all work in a similar way:

  • The AI is provided with details of the target audience.
  • The AI starts with an initial experiment,
  • It observes the results,
  • Then it modifies the approach based on what it learns. 
  • The learning process is iterative, so the longer a campaign runs, the more the AI learns, the more effective it becomes.

However, how does the AI know what target audience it should aim for in the first instance?  In many cases the digital marketing agency determines that based on the client brief.  That brief is usually written by a human and should (ideally) provide a clear answer to the question “what is my target market?”

That tells the Agency and, ultimately, the AI, who it should aim for.

However, many people, unfortunately, confuse the question “what is my target market?” with “what would I like my target market to be in an ideal world?”  This is clearly a problem and can lead to a false start.

A false start is where, at the start of a marketing campaign, the agency is effectively told to target the wrong people.  Therefore, the AI starts by targeting the wrong people and has a lot of learning to do!

A solid understanding of the target market in the first instance can make all the difference between success and failure.

Balancing data inputs

The future will, no doubt, provide us with access to a greater volume and variety of better-quality digital data.  New tools, such as AI, will help make better sense of this data and put it to work more effectively.  The digital revolution is far from over.

But how, when, and why should we rely on such data to guide our decisions?  And what role should market research (based on asking people questions rather than observing behaviour) play?

Horses for courses

The truth is that observed data acquired digitally is clearly better than market research for certain things. 

Most obviously, it is better at measuring actual behaviour and using it for short-term targeting and forecasting. 

It is also, under the right circumstances, possible to acquire it in much greater (and hence statistically reliable) quantity.  Crucially (as a rule) it is possible to acquire a large amount of data relatively inexpensively, compared to a market research study.

However, when we are talking about observed historic data, it is better at telling us ‘what’, ‘when’ and ‘how’ than it is at telling us ‘why’ or ‘what next’.  We can only look to deduce the ‘whys’ and the ‘what next’ from the data.  In essence it measures behaviour very well but captures opinion, and potential shifts in future intention, poorly.

The role of market research

Question-based market research surveys are (or at least should be) based on structured, representative samples.  They can be used to fill in the gaps we can’t get from digital data – in particular, they measure opinion very well and are often better equipped to answer the ‘why’ and ‘what next’ questions than observed data (or attitudinal digital data).

Where market research surveys will struggle is in measuring detailed past behaviour accurately (due to the limitations of human memory), even if it can measure it approximately. 

The only reason for using market research to measure behaviour now is to provide an approximate measure that can be linked to opinion-related questions measured on the same survey – to be able to tie in the ‘why’ with the ‘what’.

Thus, market research can tell us how the opinions of people who regularly buy products in a particular category are different from less frequent buyers.  Digital data can usually tell us, more accurately who has bought what and when – but that data is often not linked to attitudinal data that explains why.

Getting the best of both data worlds

Obviously, it does not need to be an either/or question.  The best insight comes from using digital data in combination with a market research survey.

With a good understanding of the strengths and weaknesses of both approaches it is possible to obtain invaluable insight to support business decisions.

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources, references & further reading:

Observational Data Has Problems. Are Researchers Aware of Them? GreenBook Blog, Ray Poynter, October 2020

Working with Digital Data Part 1 – Sentiment and Opinion

Our world is changing faster than ever before.  The digital data revolution is transforming the way we communicate, the way we shop and the way we live. 

One consequence of this is an explosion of data.  We’re collecting more and more of it with each passing month.  Data about our online behaviour, our interests, our opinions and just about every imaginable aspect of our daily lives. When cleverly used it provides a powerful and timely understanding of customer needs, leading to more effective decision making.

However, all this is not a silver bullet.  Understanding the challenges and limitations of digital data as well as its benefits is critical for getting the most out of it.

Differentiating between data types

Whether we are talking about digital data or data collected offline, we should start by drawing a distinction between observational data and attitudinal data.

  • Observational data:  This comes from the observation of behaviour.  In that sense it is an actual record of genuine consumer behaviour in a real-world setting.
  • Attitudinal data:  This encompasses any form of stated opinion or sentiment.  It represents the expressed preferences, likes and dislikes of customers.  It may or may not reflect actual behaviour, but it does represent a customer’s articulation of their view of the world.

What vs Why

In the past (especially pre-digital) measuring actual consumer behaviour through observation was very challenging.  Most brands only possessed partial data of this sort, perhaps only on an unrepresentative sub-set of their customers.  Such gaps could only usually be plugged (if at all) with expensive market research surveys.  Even then, the foibles of human memory limited the reliability of such information.

But now observational data collected digitally provides far greater granularity in terms of what consumers are doing online, as well as when, and how.

However, to fully understand such behaviour, we usually need to additionally explore why people are behaving in a certain way.  Sometimes observational data can help shed light on this.  We can deduce some of the reasons for specific decisions based on the relationships we find in the data.  However, sometimes this is not possible.  And sometimes it might be difficult to tell whether we are seeing causation or just correlation. 

This is where attitudinal data becomes important – where customers overtly tell us the ‘why’ behind their decisions.  This kind of data also has its limitations.  However, when used in combination with observational data, it can provide us with powerful insight.

Observational and attitudinal data are clearly quite different.  Each offers us different forms of insight.  Indeed, there is a lot to be said about both.  So, in this blog piece, I’ll be focusing mainly on attitudinal data (and leaving a discussion of observational data to my next piece).

Opinion and sentiment

In a digital context, attitudinal data will consist of consumer opinions expressed on social media, product reviews and recommendations and so forth.

Social media listening tools of various kinds can trawl through the web looking for this kind of data.  This builds up a picture of sentiment and draws attention to key opinions about brand image and performance. 

It can be very useful, but it clearly has potential drawbacks:

  • It may not be a representative view of the market.  Most social media platforms will have their skews and biases in terms of who engages with them.
  • There is a danger we’re listening to the opinions of a vocal minority.  But, what about the silent majority?
  • As a significant amount of information is not volunteered anonymously (as it would be on a survey), many people are only willing to say what they are happy to put their names to.  They will often avoid volunteering opinions they feel might be unpopular or controversial.

Despite this, it nevertheless offers clear advantages in terms of ease of access, data volume, and the relatively low cost of data acquisition. 

Using digital data to measure sentiment

But just how accurate is sentiment analysis using digital data of this sort?  How can we gauge its reliability?

For those on a tight budget who want some answers fast it might be the only option.  For this reason, it becomes more important to understand just how reliable it is.  And critically, how can we use it wisely?

The answer here is a complex one.  It rather depends on what digital data we are talking about and how we want to use it.  It would be as naïve to dismiss it entirely as it would be to accept it unquestioningly.

Let’s take one simple question as an illustration: “Can sentiment data on social media be used to predict an election result?”  And, if we attempt to do so, is it any more (or less) accurate than an opinion survey?

Can Twitter predict elections?

In one German study published in Social Science Computer Review, in 2013, the conclusion was that Twitter was a poor predictor of overall population sentiment.  The Twitter population was found to contain significant biases that made it unrepresentative.  That, coupled with the crude nature of sentiment analysis tools at that time, rendered it a poor predictor for an election.

A more recent and comprehensive study (2020) has shown that things have moved on since then.  As sentiment analysis tools have improved and by using them in combination with socio-economic modelling, it was possible to predict the 2016 US election results in a single state with 81% accuracy. 

That’s an improvement but, as the study observed, there’s still a way to go.  Accuracy only became reasonable with the application of more advanced tools and modelling techniques.  Standard solutions available at the time of the study would not cut it.

The population biases in the Twitter community are still not fully understood and the data that can be harvested from it is incomplete.  It was also noted that sentiment analytics still struggles to cope with more nuanced comment.  So digital analytics and data harvesting are getting better but still have some way to go.
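
To make the ‘nuance’ problem concrete, here is a purely illustrative sketch (not the tooling used in the studies above) using a simple rule-based scorer, NLTK’s VADER; the example posts are invented:

```python
# Illustrative sketch only: a simple rule-based sentiment scorer (NLTK's VADER).
# The studies discussed above used more sophisticated, bespoke tooling; the
# posts below are invented purely for illustration.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the VADER lexicon

analyser = SentimentIntensityAnalyzer()

posts = [
    "Loving the new update - brilliant work!",                    # clearly positive
    "Another broken promise. Absolutely useless.",                # clearly negative
    "Oh great, yet another 'improvement'. Just what we needed.",  # sarcasm - easily mis-scored
]

for post in posts:
    scores = analyser.polarity_scores(post)  # neg / neu / pos plus an overall compound score
    print(f"{scores['compound']:+.2f}  {post}")
```

In practice the sarcastic third post will often come out with a positive score – exactly the kind of nuance the studies above flag as unresolved.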

The value of attitudinal digital data

Although we might have a way to go before we can realistically use digital data to predict elections, that does not mean it has no merit (or that it won’t eventually get there).

It can still tell us a lot about what people are thinking and what they like/dislike about a particular product or service.  It can still help us form hypotheses about our brand image or the reasons behind customer satisfaction / dissatisfaction. 

We will get even more value from it if we bear in mind that we are dealing with inherently anecdotal data.

But how, you may well ask, can such a large volume of information be inherently anecdotal? 

The reality is that opinions about products and services expressed online represent only the views of a vocal minority.  That does not mean the data is not useful.  It can provide us with insight into the spectrum of different views and opinions about a particular brand or service.  In that sense it can do a similar job to a focus group (although it might potentially be less representative). 

So we can still make use of such data, provided we bear in mind that we are dealing with a self-selected sample and treat it accordingly.

Self-selection

One of the problems identified by studies of social media platforms is that active users on these platforms represent a sub-set of the population.  Potentially (but not necessarily always) this is a highly unrepresentative sub-set. 

To use information from such an audience, we need to understand the composition of the population and mitigate any built-in biases (a potentially complex task).  If we cannot, then it may well be misleading to attempt to use the data quantitatively.  But we can still use it qualitatively. 

By using it qualitatively, I mean it can give us a good idea of the spectrum of different opinions that exist out there in the market.  It cannot tell us how commonly held these opinions are in relation to each other in the wider population.  But it can give us a sense of the kind of things people are thinking and saying.

This, in fact, may well give us enough information to design a market research survey to quantify the prevalence of these opinions with a truly representative sample.
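
If we do want to use such data quantitatively, the ‘mitigating built-in biases’ step mentioned above typically means weighting.  As a minimal sketch – assuming we actually know (or can estimate) the true population profile, which in practice is the hard part – simple post-stratification weighting re-balances a skewed sample; all of the figures below are hypothetical:

```python
# Minimal post-stratification weighting sketch.  All figures are hypothetical and
# purely illustrative - real platform demographics are rarely known this precisely.

sample_share = {"18-34": 0.60, "35-54": 0.30, "55+": 0.10}      # skewed platform sample
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # target population profile

# Weight each group so the sample profile matches the population profile
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical share of each group expressing a positive opinion about a brand
positive_by_group = {"18-34": 0.70, "35-54": 0.55, "55+": 0.40}

raw = sum(sample_share[g] * positive_by_group[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * positive_by_group[g] for g in sample_share)

print(f"Unweighted positive sentiment: {raw:.0%}")       # ≈62% - flattered by the young skew
print(f"Weighted positive sentiment:   {weighted:.0%}")  # ≈54% - closer to the population view
```

Even then, weighting can only correct for the characteristics we know about and have measured; it cannot fix biases we are unaware of, which is why the qualitative use described above is often the safer option.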

The digital data elephant in the room

A growing challenge in the digital world (and often the elephant in the room) is the impact of bots and false actors.  This might affect everything from measures of hits to websites, through to opinion spam, fake recommendations, and bogus ratings. 

As consumers become more aware of this phenomenon their behaviour will change accordingly.  Already some are now more wary of glowing product reviews.  Someone recently told me they now ignore five-star reviews as they are “probably written by employees of the company”.

Elon Musk’s recent controversial exit from his Twitter acquisition was ostensibly driven by his belief that up to 20% of Twitter accounts are fake (Twitter would argue it’s under 5%).  The true number is extremely difficult to measure and, of course, this problem does not only affect Twitter but all social media.

Bots and false actors

However, the absolute number of bots on a social media platform is not the key point here.  It is what the bots are doing that really matters.  For example, one analysis showed that the proportion of bots actively disseminating information about cryptocurrencies was far higher than the numbers engaged in discussions about cats.

We are not just talking about bots here.  There are also false actors to consider.  Such accounts are real people, but the identities are false (or misrepresented).  A simple example would be someone writing a review pretending to be a customer when they are the seller using a false identity. When it comes to matters of political opinion, false actors of this sort already represent a significant problem.

Distinguishing genuine public opinion from false actors (bots or human) is a challenge that needs continual vigilance.  It can muddy the waters of any attitudinal data appearing online.  My guess is that tackling this challenge will be one of the most important tasks facing the digital world over the next decade.

Missing information

One of the limitations of attitudinal data available online is simply that we are restricted to using what is available.  Here there are four specific constraints that brands may encounter:

  • People create the content they feel is important – not necessarily what brands need to know.  The fact that this content is a spontaneous offering of customer opinion is a real boon.  However, sometimes brands want to know things about their markets that few people are openly discussing.
  • Large, well-known, brands are likely to attract significant online comment.  They will therefore have access to a large pool of data from which they can draw insight.  However, smaller or more niche brands are less likely to be discussed online and therefore have access to less data.
  • Most online discussions concern the here and now and the immediately foreseeable future.  People aren’t going to comment on future products and services they are not yet aware of. 
  • Finally, people like to discuss things that interest and engage them online.  Hence there is more discussion of cats than there is of insurance policies.

Depending on the market, the brand and the category, the amount of valuable content available to harvest will clearly vary.  Sometimes there will be gaps.  Sometimes those gaps will be significant.

When we need to turn to market research

Despite its limitations, attitudinal digital data certainly has its place.  It may not be perfect but sometimes we don’t need perfect.  Sometimes getting a rough idea cheaply and quickly will give us 80% of what we need.

However, sometimes attitudinal digital data can’t give us what we need.  Sometimes it is simply too unrepresentative.  Sometimes it is too incomplete.  And sometimes the issues we need to know about are simply not being discussed.  That’s when we need to turn to market research.

When it comes to measuring sentiment and opinion, a market research survey provides the best way to fill in such gaps. 

Both market research and digital analytics have their place, and it is not an either/or choice.  Sometimes using both in combination will deliver the best results.

The power of ‘observed’ digital data

However, the real strength of the digital information available to us today lies in data that is purely observational rather than attitudinal. 

Measurement of actual consumer behaviour online, that leads directly to an online purchase, represents an irrefutable record of genuine buying decisions.  Such direct observation of buyer behaviour at scale provides us with a wealth of insight that was simply unavailable prior to the digital age.

Next time, I’ll be taking a closer look at observational data.  I’ll be considering what it can tell us, its strengths and the potential pitfalls we should look out for when working with it.

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources, references & further reading:

Can We Forecast Presidential Election Using Twitter Data? An Integrative Modelling Approach – Ruowei Liu

Trying to predict the election? Forget about Twitter, study concludes, Guardian 2016

How many bots are on Twitter? The question is difficult to answer and misses the point.  May 2022. The Conversation.

Hands holding tablet and watching Youtube

The Visual Communications Age

The past few years have seen a boom in visual communications across social media.  An estimated 2.3 billion people now use YouTube every month.  Instagram and TikTok have around 1 billion monthly users each.

Visual social media of this kind – be it in the form of still images or video clips – are transforming the way in which we communicate.  Part of this change is simply a function of accessibility.  Technology has made it far easier for people to create visual images and make short video clips and mini films than ever was the case, even ten years ago.  And now there are more social media outlets than ever before where it is possible to publish such material.

It is incredible to think that twenty years ago Facebook, YouTube and Twitter did not even exist.  How much the world has changed!

However, we should not be tempted to think that social media platforms will continue to grow forever.  There is a finite limit to the number of users any platform can attract, after all.  Like in any other market, market growth will inevitably give way to market maturity at some point.

Platform maturity

Facebook’s owner Meta Platforms recently recorded a record daily loss on the stock market.  This came in the wake of the news that Facebook’s Daily Active Users fell to 1.929bn in the three months to the end of December. This compares to 1.930bn in the previous quarter.

This is the first time Facebook has experienced such a fall; a clear sign that this particular platform is reaching its mature phase.  Of course, it was bound to happen eventually.  After all, there are only so many active daily users you can have from a global population of 7.7 billion (some of whom do not have good internet access).

Rising Platforms

TikTok’s owner ByteDance, by contrast, saw revenues grow by 70% in 2021 (although even this is slower than the spectacular growth seen previously).

Facebook is primarily about written communication, albeit pictures, images and gifs are often shared on the platform.  TikTok is, of course, mainly about the short form video clip.  The BBC recently reported that Facebook’s owner has warned of pressures on revenues precisely because of stiffer competition from TikTok and YouTube.

Are these signs, therefore, of a wider trend?  Are we seeing a real sea-change in the way in which we communicate?  A transition from a culture of communication based on the written word to one where visual images and video become the dominant mode of interaction?

A visual future?

Are these portents of things to come?  Of a world where communication is primarily achieved with the video clip and the streamed podcast?  Some would argue it is already happening.  After all, it is now quite easy for anyone to broadcast their own content on YouTube, TikTok or Twitch, and it will only become easier with each passing year.  Now everyone is a content publisher.

There are also signs of generational differences.  Anecdotally we are hearing that younger people are more likely to engage with social media like TikTok and YouTube.  Social media such as Facebook, with its higher reliance on written content, still has an appeal for older generations but is, perhaps, less suited for a generation addicted to the video clip. 

But can we put any hard numbers to these claims?

Generational differences

A Synchronix survey from last year looked at social media use amongst gamers.  We wanted to understand the extent to which people of different ages engaged with social media to discuss or exchange information about gaming.  The results showed some clear generational differences in terms of preference.

Graph of gamer social media preferences by age

Platforms

YouTube: This emerges as the most popular social media platform for gamers under the age of 45.  Older gamers also engage with it extensively but, for the over 45s, it is relegated to the number two spot. 

Instagram: This is the second most popular platform with the under 25s.  It is less popular with the 25-34 age group but still ranks third overall.  Its popularity clearly diminishes with age, especially amongst the over 45s.

TikTok:  If anything, TikTok illustrates the most significant generational differences of all.  It is used by nearly 40% of the under 25s, placing it neck and neck with Instagram within this age group.  This drops to 26% amongst the 25-34s (still significant).  However, its popularity wanes markedly in older age groups.

All three of these visually led social media platforms reflect the same overall pattern.  Their popularity is greatest in the youngest age groups and lowest amongst the over 45s.

Facebook:  Despite the recent slight dip in use, Facebook is popular with all ages.  However, it is not even one of the three most popular platforms for the under 25s, although this soon changes when we start to consider older age groups.  It is the second most popular platform for the 25-44 age group and the most popular with the over 45s.  Its higher reliance on written content lends it greater appeal for older audiences.

Twitter: Twitter is the fourth most popular platform with the under 25s but drops in popularity with older age groups (especially the over 45s).  This is interesting: although Twitter is primarily text based, it shows that written communications retain a certain degree of popularity with the younger generation.  The short form tweet, with its soundbite feel, still resonates with Generation Z in a way that other forms of written communication appear to struggle to do.

The future

One thing is now clear. Visual media has become critical for effectively communicating with Gen Z.  However, they are not entirely abandoning the written word.  Their preference for Twitter above Facebook is likely influenced by a texting culture in which short soundbites are strongly preferred to longer written posts.

The recent dip in Facebook usage likely reflects this generational behaviour shift.  However, the downtick in Facebook engagement should not be exaggerated.  The fact is that Facebook remains very popular amongst the over 25s and the most important social media for engaging with the over 45s.

As newer generations of internet users reach adulthood, it is likely that different generational preferences will become increasingly marked.  Marketeers will increasingly need to adapt strategies to employ a different mix of social media channels depending on the generation of customers they are aiming to communicate with.

So, a campaign aimed at the over 45s may need to focus more on Facebook, YouTube and WhatsApp.  However, a campaign aimed at a Gen Z audience would need to take a very different approach, and would do better to focus mainly on Instagram, TikTok and YouTube.

Given the rapid pace of change we have experienced in the world of social media over the past decade, we can expect further significant changes over the next few years.  The next TikTok is likely to be a platform that facilitates video and/or audio interaction rather than something more reliant on the written word.   

As Gen Z comes of age and as younger generations follow, we will move to a culture highly dependent on streaming, video communication and visual interaction.  Perhaps we will eventually see this evolve into virtual reality driven experiences.  In fact, I’m sure this will happen at some point.  And although I suspect it is still a good way off, I would not be surprised if we found ourselves living in such a world twenty years from now.

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources

https://www.bbc.co.uk/news/business-60255088#

Playbook – UK Gaming Market Report 2021, Synchronix Research

https://www.un.org/en/global-issues/population

https://backlinko.com/instagram-users

https://backlinko.com/tiktok-users

https://www.globalmediainsight.com/blog/youtube-users-statistics/

https://www.reuters.com/markets/funds/exclusive-tiktok-owner-bytedances-revenue-growth-slowed-70-2021-sources-2022-01-20/


Graph of UK hospitalisations from Covid

Omicron: Have we made it through the third wave?

Restrictions lifting

In the UK the government has recently relaxed covid restrictions imposed to contain the spread of the Omicron variant.  We have been told that the limitations imposed in recent weeks have worked.

The vaccination programme which ran through the course of last year brought us all hope of some light at the end of the tunnel.  But just as we were thinking it might all be over, along came Omicron. 

Case numbers rose significantly over the Christmas period as the new strain took its toll, and hospitalisation numbers soon followed suit.

Now that we have enough data to better assess the impact of Omicron, it seems like a good time to take a fresh look at the numbers and see what they tell us.

Bad news

The bad news is that the rise in case numbers has caused hospitalisations and (sadly) deaths to spike once more.  The number of new hospitalisations reported each day is currently higher than at any time since the end of January 2021.  The current spike in admissions now ranks as one of the three major peaks of the pandemic. 

Fortunately, it looks like new cases and hospital admission numbers are now starting to flatten and potentially reduce.  If current trends continue, this recent wave will end up being nowhere near as deadly as the earlier peaks.

Good News

The good news is that the protection afforded by vaccination appears to be working – and working well.  Omicron has not only caused case numbers to spike, but to spike to truly unprecedented levels.  Yet, despite this, the levels of serious illness recorded are surprisingly low.

A look at the government data tells us that during the first 10 days of January this year an average of nearly 145,000 people were diagnosed with covid every single day.  That is a significantly higher infection rate than we have seen in any previous wave; nearly three times the level that we were seeing in the same ten-day period for January 2021 (the peak of the previous wave).

However, there is cause for cautious optimism.  The average for new infections during the first ten days of January 2022 is actually a little lower than the average for the last ten days of December 2021.  The most recent data suggests that numbers of new cases look to be on the decline.  Let’s hope this trend continues.

Graph of UK Covid numbers January 2022

Fewer hospitalisations and deaths

But whilst case numbers have reached record highs, the numbers of hospitalisations and deaths certainly have not.  The first ten days of January witnessed an average hospital admission rate of 2,230 covid patients per day.  That is significantly less than the average of 3,935 we saw during the last peak.  So, a near threefold increase in case numbers is now resulting in a much lower rate of serious illness. 

Put simply, if you get a covid diagnosis now, you are five times less likely to end up in hospital than if you’d been diagnosed with covid this time last year.

The news in terms of deaths rates also looks promising.  Despite the higher overall infection rates, death rates are now a fraction of past peak levels.  The average number of deaths recorded per day in early January was 205, compared to a number more than four times higher from a much lower infection rate at the start of 2021. 

As a result, you are 12 times less likely to die of covid if you get infected now, than was the case this time last year. However, that statistic does come with one very important caveat – vaccination.
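
The rough arithmetic behind those ‘five times’ and ‘12 times’ figures can be reconstructed from the averages quoted above, treating the stated ‘nearly three times’ case ratio and ‘more than four times’ death ratio as approximations:

```python
# Back-of-envelope reconstruction of the risk comparisons quoted above.
# All inputs are the approximate averages stated in the text.

cases_now = 145_000           # average daily cases, first 10 days of January 2022
admissions_now = 2_230        # average daily hospital admissions, same period
deaths_now = 205              # average daily deaths, same period

cases_then = cases_now / 3    # January 2021 cases were roughly a third of current levels
admissions_then = 3_935       # average daily admissions at the January 2021 peak
deaths_then = deaths_now * 4  # January 2021 deaths were (more than) four times higher

hosp_ratio = (admissions_then / cases_then) / (admissions_now / cases_now)
death_ratio = (deaths_then / cases_then) / (deaths_now / cases_now)

print(f"Hospitalisation risk per case, Jan 2021 vs Jan 2022: ~{hosp_ratio:.0f}x")  # ~5x
print(f"Death risk per case, Jan 2021 vs Jan 2022: ~{death_ratio:.0f}x")           # ~12x
```

These are back-of-envelope ratios rather than a formal epidemiological analysis, but they show how the headline comparisons follow from the published daily averages – and, as the next section explains, the caveat about vaccination still applies.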

Vaccination works

Vaccination has played a critical role in keeping hospitalisation and death rates low over the past month or so. 

The success of the vaccination programme does mask some disturbing statistics, however.  The population as a whole is certainly seeing significantly lower rates of hospitalisation and fatalities from covid.  But this is only true amongst those who are vaccinated. 

For the unvaccinated minority, the story is much bleaker.

The Intensive Care National Audit and Research Centre recorded that 61% of the patients admitted to critical care in December 2021 with confirmed covid-19 were unvaccinated. 

Government data shows that just 9.4% of the total population is now unvaccinated.  That means that 9.4% of the population account for 61% of the most seriously ill patients. 

Clearly, the lack of vaccine protection makes people significantly more vulnerable to serious illness and death.

The truth is, if you are unvaccinated, you are 15 times more likely to end up in an intensive care unit with covid-19 than someone who is vaccinated.
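
That ‘15 times’ figure follows from comparing each group’s share of critical care admissions with its share of the population.  A quick calculation makes the logic explicit:

```python
# How the '15 times more likely' comparison follows from the shares quoted above.

unvaccinated_pop_share = 0.094   # 9.4% of the total population
unvaccinated_icu_share = 0.61    # 61% of critical care covid admissions, December 2021

vaccinated_pop_share = 1 - unvaccinated_pop_share   # 90.6%
vaccinated_icu_share = 1 - unvaccinated_icu_share   # 39%

# Admissions relative to group size (a relative rate, not an absolute risk)
unvaccinated_rate = unvaccinated_icu_share / unvaccinated_pop_share
vaccinated_rate = vaccinated_icu_share / vaccinated_pop_share

print(f"Relative risk, unvaccinated vs vaccinated: ~{unvaccinated_rate / vaccinated_rate:.0f}x")  # ~15x
```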

Weathering the storm

As January 2022 wears on, it looks increasingly likely that we have weathered the storm of the Omicron wave.  A milder (if considerably more infectious) strain, combined with widespread vaccination, has limited the impact of the third wave.

Death rates and hospitalisation rates are far lower than in previous waves.  Nevertheless, the sheer number of infections has placed significant strains on the NHS and that is a problem that we shouldn’t ignore.

That said, the data shows beyond doubt that the vaccination programme is working.  We can afford to be hopeful about the future.

About Us

Synchronix Research offers a full range of market research services and market research training.  We can also provide technical content writing services.

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources

BMJ

UK Government coronavirus statistics

If Trump and Pope Francis were in a marketing campaign

Planning a marketing campaign for 2022?

When designing a marketing campaign, one of the first questions that needs answering is ‘who’s my target audience?’

A good understanding of target demographics (or firmographics in b2b markets) is usually a good starting point but it can only get you so far!

Many people who share the same demographics have very different needs and preferences.  And the more precisely you’re able to define your target audience the more effective your marketing campaign is likely to be.

Beyond demographics

Basic demographics might tell us that 99% of our past customers are female and aged 25-49.  Or, if we sell b2b, we might be able to say that most of our customers are SME engineering businesses.  But clearly, not all women aged 25-49 (or SME engineering businesses for that matter) are interested in our product.  So, it’s getting a step beyond this that’s often the problem.

So, how might we sharpen our focus to ensure we are targeting the right people with the right messages?

What we need is to develop a profile that includes other actionable information.  It might tell us something about how our customers like to buy that could affect the way in which we sell.  Or, perhaps it might tell us something about their interests that helps us to design a more effective advertising message.

Knowing what is likely to interest and engage our target audience is key.  It helps us make important decisions about messaging and message placement. 

This all sounds great in theory, but it is not without its challenges.

Pitfalls

I’m sure we’ve all heard some ‘marketing persona’ horror stories.

The trouble is that once you go beyond demographics you are looking at increasingly intangible things.  This makes it harder to pinpoint exactly what you need to focus on and what is, frankly, not relevant.

Typical problems in these cases might well include:

  • It becomes hard to see the wood for the trees, as you have so much detail. 
  • You can create some great looking personas but, when it comes to actioning them, it’s hard to see how they help you design and execute a marketing campaign.
  • You create several different persona types, and it all looks sensible, but… you struggle to see how or why you would practically approach these groups any differently from each other.

In essence, you end up with information that you can’t action.

So how can you get it right?

Having a clear business goal

It may seem obvious but having some clear overall business objectives in mind before you even start is a critical first step. 

A vague objective like “we need to improve our marketing communications” is not only going to make it hard to define your target audience, but it also makes the design of any marketing messages very challenging.  And, of course, it makes it very difficult to measure the success of any marketing campaign.

Often the objective might simply be to boost sales.  That has the merit of being tangible, but sometimes there might be more to it than that. 

Perhaps we’ve identified that our brand has an image problem with certain customers?  If this is the case, we might want to look at which customers these are and why, so that we can address these issues directly. 

Perhaps we want to attract more customers of a particular type (e.g. higher margin)?  That means we need to understand what makes these people different from other customers and how we can engage with more of them.

Our business objectives determine what kind of information we need to consider or collect to better understand our target audience.

Deciding what we need to know and why

Once we have our objectives clearly in mind, we can start to make some choices in terms of the kind of information we need.

Clearly, we are going to want to look at buyer behaviour, preferences, attitudes, and interests.  But we can’t look at everything!  So, we need to set some rules early on in terms of what’s relevant.

Ultimately, when the chips are down, we only need to know three specific things about our audience:

  • Who are the people I need to target?  And,
  • How do I reach them?  And,
  • How best can I engage with them?

A good marketing agency can certainly help with these, especially the latter two.  However, the more focused your brief, the easier it will be for them to get it right. 

So, you need to define your target audience but only in terms of information that sheds light on these three key questions – not in terms of everything!

Knowing that someone has a particular hobby is potentially interesting.  But what do we do with that information?  Does it help us assess whether we should be selling to them or not?  Can it help us target them more directly?  Does it help us design a message that engages them more effectively?  If the answer to those three questions is ‘no’, then, interesting or not, it is useless information since it is not actionable.

On the other hand, if, for example, we were considering whether to sponsor a sporting event, it could help us a lot to know how popular that sport was in our target audience.  If it is only 5% then sponsorship is hardly worth it.  If it is 90% on the other hand, then sponsorship looks like a good investment!  Then the information becomes actionable.

The key thing is to make sure that each piece of information we consider (or try to collect) has the potential to be directly related to one of our three key questions.

Information gaps

Sometimes it is possible to create a detailed, meaningful, profile of our target audience just with the information we already have.  The results of past marketing campaign activities, customer interactions with salesmen and CRM records might all help you build up a picture of your target audience.

However, sometimes you might find that there are gaps in your knowledge or aspects of customer behaviour that you simply don’t understand. 

Maybe you have some of what you need buried in your CRM somewhere that just needs digging out.  But maybe it’s information you just don’t have.  This is where market research can come in; helping to plug any gaps and provide the answers to anything you might be missing.

Developing meaningful audience personas

It is one thing having information – it is quite another making effective use of it.

Pulling it all together and sifting out what is important from what is not is an important step in itself.  The last thing you want is to end up in a situation where information overload prevents effective action.  During this sifting process it is key to keep referring to our three key questions.

One of the reasons these exercises can sometimes run into problems is that people go overboard and (for reasons that are ultimately unclear) develop numerous different marketing personas, each of which represents a different potential target segment.

Before you start going down that road, ask yourself this – do I even need that?

It is all too easy to divide your audience up into different audience personas.  However, although they may indeed be different in very real and measurable ways, the question is: are they different in a meaningful way?  This comes back to our three key tests – should I target these people?  How do I reach them?  What should I say to them?  If the answers to these three questions are the same for all the different personas you create, then there is no point in having them all. 

Perhaps you only need one target audience persona – unless there is a tangible business reason for having more than one, there is no need to overcomplicate things.  Developing multiple personas can provide valuable insight when it’s actionable, but it is nothing more than a confusing distraction when it’s not.

Success Criteria

When you arrive at a definition of your target audience, there are some key things you need to check to make sure it’s on the right track.  More specifically, ask yourself: is it…

  • Identifiable: your target audience has a distinct, characteristic mindset or profile that sets them apart from everyone else.
  • Significant:  it may sound obvious but there is little point in defining a target audience that only represents 1% of your market!
  • Reachable: You need to have a good idea of how to reach your audience.
  • Differentiated: Your target audience needs to be clearly differentiated from the rest of the market.  Also, IF you have ended up with more than one target persona, you need to be able to differentiate between them.
  • Actionable: i.e. it helps you design and execute a marketing campaign.
  • Has sustained relevance: you need to be confident personas aren’t based on passing fads that won’t apply two months down the line.

If you’re able to define your target audience in those terms, you can really make your marketing campaign a lot more focused and effective.  

About Us

Synchronix Research is a full-service market research agency.  If you have any questions about our services or would like to explore the concept of creating meaningful audience profiles further, please get in touch.

You can email us with any questions; we’d be more than happy to hear from you.

A Christmas scene

What makes a Merry Christmas?

Christmas approaches. 

With Omicron now looming we once again find ourselves facing uncertain times. 

But we shouldn’t let the Omicron grinch ruin our Christmas.  And so, this blog aims to be a bit more festive in the hopes of raising our spirits – at least a bit! 

Let’s try to think, instead, about all the things we love about Christmas.  Christmas day, Christmas dinner, Santa, nativity scenes, Christmas trees and spending time with our family.

In thinking about these things, it suddenly occurred to me – how did all these things come together to make a modern Christmas?

What makes Christmas Christmassy?

Well, Christmas is obviously a Christian festival to celebrate the birth of Christ.  I think we all get that bit.  So, it’s easy to see how the nativity fits in. 

But what about the rest of it? 

Santa?  A jolly man in a red suit from the North Pole?  I’m not sure what he would have been doing in 1st century Bethlehem.  Reindeer?  You don’t see many of those in the Middle East.  Come to think of it, I’m pretty sure turkey and Xmas pud weren’t on the menu at the Bethlehem inn.

So how did we end up with the Christmas we have today?  Where did all these seemingly unrelated ‘trimmings’ come from, I wondered?

Well, as I am a researcher, I should be able to find that out!

Christmas Day

Christmas day falls on 25th December.  This is the day when we celebrate Christ’s birthday.  But wait.  How do we know he was born on that day?  The Bible itself does not actually say when he was born.  So where do we get that date from?

The earliest record we have of Christmas being officially celebrated on December 25th was in 336 AD.  It was just after the Roman Emperor Constantine had converted to Christianity.  So why did the Romans pick that day?

Well, as it turns out, December was a popular time of year for festivals.  The pagan Germans celebrated Yule at around this time and the Romans themselves had Saturnalia.

Saturnalia, by Constantine’s time, ran from 17th to 23rd December.  It was a festival of the Roman God Saturn.  It was typically celebrated with banquets, private gift giving and general drunkenness.  Sound familiar?

Constantine’s new Christian regime was no doubt keen to wean people off their pagan beliefs and festivals.  So perhaps that might explain why 25th December was picked as the official day to hold a mass to mark the birth of Christ.  Or perhaps it just seemed like an obvious time of year to have a festival.

Whatever their thinking, the Romans picked 25th December as the official date for Christmas Day from 336 AD onwards.

Santa

So, how does a jolly North Pole dweller with a penchant for chimney potholing and distributing gifts to children find himself in Bethlehem at the time of Christ’s birth?

Well, obviously, he didn’t.  But don’t worry – Santa is real!

Or, at least, Saint Nicholas was real.  Saint Nicholas was born in Turkey (not the North Pole) in 270 AD.  He is officially the patron saint of sailors, merchants, archers, repentant thieves, children, brewers, pawnbrokers, unmarried people, and students.  That’s a lot to look after if you are also expected to dish out presents to every child on earth in just one night every year.

Saint Nick famously inherited a large amount of cash from his parents when they died.  However, being a devout Christian, he took all the teachings about the potential evils of wealth very seriously and decided the best thing to do was to give the money away to the poor and needy.  But he didn’t want to be tempted with pride by taking the credit for his charitable acts, so he distributed the cash at night, hooded and cloaked.

Never went near the North Pole.  Probably never saw a reindeer.  Elves?  Right out.

How Saint Nick became ‘Santa’

Anyway, in 1087 AD (we think) the Spanish brought the celebration of Saint Nick’s saint’s day to the Netherlands.  In the Netherlands, they called him Sinterklaas – from which we get “Santa Claus”.  It was the Dutch who seem to have taken the gift giving aspect of Saint Nick’s story one step further and added tales about him riding across the rooftops (on a grey horse rather than in a sleigh) dishing out presents to kids.

Originally his saint’s day was set on 6th December – close enough to Christmas for him to eventually become a part of the main event. 

‘Father Christmas’

Meanwhile over the Channel, in England, the English invented a character they came to call “Father Christmas”.  The earliest record of this was a carol written by the Reverend Richard Smart, most likely published sometime during the 1460s or perhaps the early 1470s.

In this carol the Rev. Smart mentioned a character called ‘Sir Christmas’ who announces Christ’s birth and encourages those who hear the good news to ‘make good cheer and be right merry’.  The good Reverend was certainly an optimist, since the Wars of the Roses were raging all around him when he wrote it!

By the time of Henry VIII, the character of Father Christmas was well established in England.  By that stage he was usually depicted as a large man dressed in robes of green or scarlet.

At some time – no one really knows exactly when – the image of Father Christmas and Sinterklaas blended together in an English-Dutch fusion to create the Santa Claus we know and love today.

Santa gets his sleigh

When Europeans began to settle the Americas in significant numbers, they brought their various Christmas stories and traditions with them.  And so it was that Santa first acquired his sleigh and his reindeer not in the North Pole, but in New York! 

In early 19th century New York, the image of Santa riding in a sleigh pulled by reindeer first appeared.   The grey horse had obviously been traded in for the sleigh and reindeer but, aside from this, New York’s Santa was the same Santa inherited from England and the Netherlands. 

An academic by the name of Clement Clarke Moore was the specific New Yorker most directly responsible for popularising the new look Santa.  He wrote a poem about Santa and his sleigh in 1823, even going so far as to give all the reindeer their names. 

It all had little to do with 3rd century Turkey, but it soon became very popular – so popular that Moore’s image of Santa with his reindeer and his sleigh stuck. 

And where do you find reindeer?  Well obviously, in places like Lapland of course!  And so it was that Santa found a home in the North Pole – all thanks to a New York Professor of Oriental and Greek Literature.

Christmas Trees

Evergreen firs are most common in northern climes, so not necessarily an obvious choice of tree for a Bethlehem scene.  So, how did Christmas trees get in on the act? 

In more pagan times evergreen trees were viewed as a symbol of life in mid-winter.  They appeared to thrive at a time when other plants died.  However, they weren’t necessarily associated with any festival.  This came much later.

There is one story that says the tradition of decorating trees for Christmas started with Martin Luther – the 16th century founder of Protestantism.  The tale goes that he was walking home one night and was awestruck by the sight of stars shining through the trees above.  He apparently decided to re-create the effect at home for his family by decorating fir trees with candles.

Whether this is true or not, the custom of decorating a Christmas tree began around this time in Germany.  However, Christmas trees were not universally accepted as part of the Christmas tradition by any means.  In the 17th century, many puritans in both America and England disapproved – denouncing them as a “heathen” practice.  But 200 years later, when Queen Victoria and her German husband, Prince Albert, allowed themselves and their family to be sketched enjoying a family Christmas around one, the Christmas tree became firmly established as part of the tradition.

Christmas Pudding

The very earliest Christmas puddings appeared in 14th century England.  Originally, they were a kind of porridge called “frumenty” – a savoury dish made from meat mixed with wines and fruits.  It was eaten during the preparations for Christmas but not as part of the day itself.

By the 16th century, tastes had changed and it became more of a sweet pudding than a savoury dish.  Dried fruit had become more readily available and was increasingly used as a standard ingredient.

It wasn’t served as a standard dessert for Christmas Day until around 1650.  By this time, it was called “plum pudding” and much more like the recipes we use today.  The puritans of course tried to ban it on the grounds that it was “sinfully rich” in flavour.  They were probably right, but that doesn’t stop us from eating it these days!

Christmas Turkey

Turkeys were first brought to England in the early 16th century by Spanish merchants returning from the Americas.

There was an advantage in eating turkeys in the winter rather than killing a cow or a chicken.  A cow could provide milk through the cold months if kept alive and a chicken could provide far more eggs than a turkey.  This meant that turkeys presented a very attractive alternative for a hearty mid-winter meal.

Henry VIII was the first person to specifically include turkey in a Christmas feast.  Before then such feasts had typically included geese, boars’ heads and even peacocks!

So, this is Christmas

As it turns out, the Christmas we know today is a truly international creation – blending traditions and stories from a diverse mix of different countries and peoples. 

Romans, Dutch, Germans, English, Turks, Americans, and Spanish have all contributed their own traditions to the Christmas story.  Everyone from American academics to Turkish Saints and English Kings have all played their part.

Christmas really is for everyone!

So, raise a glass this festive season and take the advice of the good Rev. Smart to ‘make good cheer and be right merry’.

Merry Christmas everyone! 

About Us

Synchronix Research offers market research and content writing services. 

You can read more about us on our website.  

You can catch up with our past blog articles here.

If you would like to get in touch, please email us.

Sources

Britannica

Christianity.com

History.com

Pudforallseasons.com.au

Squaremeal.co.uk

Thefactsite.com

Wikipedia – Saint Nicholas

Wikipedia – Saturnalia

Kindle against a mountain backdrop

Books in a digital world

These days, it seems as though virtually everything is rapidly going digital. But, as it turns out, not so much when it comes to books.

That is not to say that ebooks and downloaded audiobooks have not enjoyed significant success in recent years.  But, despite all this, paper formats remain highly popular.

Paper remains king

Figures from the Association of American Publishers show that ebooks and downloaded digital audio combined accounted for just 16.5% of book revenues (for consumer publications) in September 2021.  The market, in dollar terms, is still dominated by paper.

Some have pointed out that such figures probably exaggerate the position of physical books.  AAP figures come from major publishing houses, which tend to be more reliant on paper.  The self-published/indie market generates a higher proportion of digital sales, which is not represented in their figures. 

Also, digital format books are usually sold for a significantly lower revenue per unit than paper.  So, volume share and revenue share will be very different animals.

However, even if we take account of these factors, print still accounts for the largest share of the market.  Estimates from The Bookseller put ebooks at something closer to 19% of the market by value and 36% by volume in 2019.  You would need to add to that the share taken by the growing downloaded audio market.  Nevertheless, it’s still clear that paper remains very popular.
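
To see why the value and volume shares diverge so much, a quick worked example is enough – the average prices below are purely hypothetical:

```python
# Why a 36% volume share can equate to a far smaller value share.
# The average prices below are hypothetical, purely for illustration.

ebook_volume_share = 0.36
print_volume_share = 0.64

avg_ebook_price = 3.00   # hypothetical average selling price per ebook
avg_print_price = 8.00   # hypothetical average selling price per print copy

ebook_revenue = ebook_volume_share * avg_ebook_price
print_revenue = print_volume_share * avg_print_price

ebook_value_share = ebook_revenue / (ebook_revenue + print_revenue)
print(f"Ebook share of value: {ebook_value_share:.0%}")  # ~17% with these assumed prices
```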

Why do paper books remain so popular?

Given so many other sectors have ‘gone digital’ so quickly, the ongoing resilience of the paper book market requires some explanation.

An obvious question to ask is whether this is a generational thing?  In many aspects of modern life, the older age groups have proven more reluctant to move to digital.  The same factor is likely at work here – but that does not fully explain why the non-digital option remains so popular in this market compared to others.

Others might point to the fact that there are people who struggle to read books in digital form.  Some people don’t like screen reading and some even find that it gives them headaches.  No doubt this is an issue for some, but surely not that many.  Also, screen issues would not serve as a barrier to downloaded audio.

Another factor is that you effectively rent digital books – you don’t own them.  Some people may object to this on principle and stick with paper as a result.  But how many people are even aware that they don’t own the books stored on their ebook reader? 

Of course, practically speaking, a paper book requires no battery and as you can only read one book at a time, it is almost as easy to carry around as a digital version.  So, in that sense, the e-version offers only a minimal advantage.

Emotional appeal

People do take practical considerations into account when making buying decisions, but much of our consumer behaviour is driven by emotional need rather than simple logic.  So, perhaps paper books continue to have appeal because they are attractive as physical products in their own right.  Fans of the format like its tangibility and its aesthetic appeal. 

The pleasure of storing a book on a shelf or of building a physical library may provide an emotional motive for preferring paper to digital for some.  The satisfaction of signposting your literary tastes to others on a train or in a coffee shop, by displaying the cover of your chosen read may also be a factor.  Even the sensory feel of a physical book may provide a subconscious motive for sticking with paper.

Preference is not an either/or choice

We asked 119 fiction readers about their preferred formats for fiction this autumn.  The results revealed that, for many people at least, it is not always an either-or choice when it comes to buying ebooks vs print.

68% say they enjoy reading fiction in ebook form but 71% also expressed an affinity for paper.  There is considerable overlap here.  Indeed, 42% of readers say they enjoy reading both ebooks and paper books.  29% express a preference for paper but not for ebooks and 26% prefer ebooks to paper.

Clearly then, 42% of the market would happily consider buying a book in either format depending on the book and the situation.
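
For anyone wanting to see how those percentages fit together, the simple set arithmetic is laid out below:

```python
# How the stated preference figures fit together (shares of the 119 readers surveyed).

both = 0.42         # enjoy both ebooks and paper
paper_only = 0.29   # enjoy paper but not ebooks
ebook_only = 0.26   # enjoy ebooks but not paper

print(f"Enjoy ebooks: {both + ebook_only:.0%}")                     # 68%
print(f"Enjoy paper:  {both + paper_only:.0%}")                     # 71%
print(f"Neither:      {1 - (both + paper_only + ebook_only):.0%}")  # ~3%
```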

Younger readers are more open to new formats

It is true that younger readers are more willing to try newer formats.  80% of the under 45s enjoy ebooks, compared to 62% of the over 45s.  Younger readers are also more likely to be willing to try audio books (30% of the under 45s like this format, compared to 19% of the over 45s).

Paper remains a highly popular format regardless of age.  Younger readers remain significant fans of paper books and show no signs of abandoning the format any time soon.  Indeed, they are no less likely to express a fondness for paper than older readers.

Higher volume readers rely more on ebooks

If we take a look at people who say they ‘love’ reading ebooks and compare them with other people who are less keen, we see some behavioural differences worth considering.

eBook lovers do tend to read more (although it is important to note that some of this will be consumed in paper form as well as digital).  71% said they read ‘very often’ as compared to just 44% of other readers. 

Naturally, if you are consuming a higher volume of books then opting for an ebook format makes more sense.  For one thing, ebooks are less expensive, so acquiring them in volume would work out at a significant saving vs paper. 

Also, if you are reading more then you are likely to be getting through more books in a shorter time frame.  Hence, whilst you are travelling, ebook readers make it easier to carry more books with you.  If you are reading less, then it’s unlikely you need to carry more than one book with you.  In fact, an infrequent reader might not see the need to carry books around at all, opting to read only when at home in bed.

The desired reading experience

However, we did find evidence to show that format preference may well be influenced by the emotional/aesthetic appeal of the format rather than practicalities or demographics.  The kind of reading experience a reader is looking for influences whether they might choose to read a book in paper or ebook form.

ebook lovers were more likely to say that they enjoyed reading books with comforting themes (32% expressed a preference for this experience, compared to only 16% of other readers).

On the other hand, ebook lovers were much less likely (7% vs 27%) to express a strong attraction to books that covered unsettling themes that really made them think. 

Could it therefore be that the electronic form exerts a greater appeal for situations where readers are looking for a relaxing and comforting reading experience?  By contrast, could the desire for paper have greater appeal in situations where a more thoughtful and challenging reading experience is desired?

There may, of course, be other emotional drivers that cause a reader to pick paper over digital (or vice versa) which we have not yet had the opportunity to fully explore.  But it is nevertheless clear that consumer choice is dictated by factors other than practicality and function.

Digital won’t replace paper any time soon

Paper format books remain highly popular.  Significant numbers of younger readers (the majority in fact) continue to enjoy reading paper books.  So, we can safely say that we won’t be seeing any rapid migration to digital led by younger readers any time soon.

Paper continues to have enduring appeal – an appeal that may well transcend any practical advantages of the digital format and which is rooted more deeply in the emotional experience of engaging with a paper book.

For these reasons any migration to purely digital consumption is likely to be slow.  Maybe, in time, it will accelerate.  Perhaps concerns about the environmental impact of consuming paper books might eventually tip the balance in favour of digital.  But that’s something for the longer-term future.  For the immediate future, paper looks set to remain a key format.

About Us

Synchronix is a full-service market research agency.  We believe in using market research to help our clients understand how best to prepare for the future. 

You can read more about us on our website.  

If you wish to follow our weekly blog you can view all our past articles on our website here.

If you have any specific questions about our services, please contact us.

Sources

Association of American Publishers

CNBC

The Bookseller

Synchronix Market Research
