Just how accurate are Opinion Polls?
Just after the Brexit referendum result became known, in late June 2016, several newspapers ran stories on how the opinion polls “got it wrong”.
Typical of these was an article in the Guardian from 24th June 2016, with the headline “How the pollsters got it wrong on the EU referendum.” In it the journalist observed:
“Of 168 polls carried out since the EU referendum wording was decided last September, fewer than a third (55 in all) predicted a leave vote.”
Of course, this is neither the first nor the last time pollsters have come in for criticism from the media (not that this seems to stop journalists from writing articles about opinion polls).
But sensationalism aside, how accurate are polls? In this article, I’ll explore how close (or far away) the polls came to predicting the Brexit result, and what lessons we might draw from this for the future.
The Brexit Result
On 23rd June 2016, the UK voted by a very narrow margin (51.9% to 48.1%) in favour of Brexit. However, if we just look at polls conducted near to the referendum, the general pattern was to predict a narrow result. In that respect the polls were accurate.
Taking an average of all these polls, the pattern for June showed an almost 50/50 split, with a slight edge in favour of the leave vote. So, polls taken near to the referendum predicted a narrow result (which it was) and, if averaged, just about predicted a leave result (which happened).

To compare the predictions with the results, I’ve excluded people who were ‘undecided’ at the time of the surveys, since anyone still ‘undecided’ on the day would presumably not have voted at all.
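To make that exclusion concrete, here is a minimal sketch of the arithmetic involved in re-basing a poll on ‘decided’ voters only. The poll figures used are hypothetical, purely for illustration.

```python
# Re-expressing a poll's Leave/Remain split after excluding 'undecided'
# respondents. The figures below are hypothetical, for illustration only.

leave, remain, undecided = 44.0, 46.0, 10.0  # raw poll percentages

decided = leave + remain
leave_share = 100 * leave / decided    # Leave share among decided voters
remain_share = 100 * remain / decided  # Remain share among decided voters

print(f"Leave {leave_share:.1f}% vs Remain {remain_share:.1f}%")
# -> Leave 48.9% vs Remain 51.1%
```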
Of course, the polls did not get it spot on. But that is because we are dealing with samples, and samples always carry a margin of error, so they cannot be expected to be exact.
Margin of error
The average sample size of the polls run during this period was 1,799 (some had sample sizes as low as 800; others, several thousand). On a sample of 1,799, a 50/50 result carries a margin of error of +/- 2.3%. That means that if such a poll predicted 50% of people were going to vote Leave, we could be 95% confident that roughly between 48% and 52% would actually vote Leave.
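For anyone who wants to check that figure, the standard margin-of-error formula for a proportion reproduces it. A quick sketch, assuming a 95% confidence level and a 50/50 split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (in percentage points) for a proportion p on a sample of n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1799), 1))  # -> 2.3
```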
In the end, the average of all these polls came to within 1.7% of predicting the exact result. That’s close! It’s certainly within the margin we’d expect.
You might wonder why polls don’t use bigger samples to improve that margin. If a result looks like it will be close, you’d think it would be worth using a sample large enough to reduce the error margin.
Why not, for example, use a sample big enough to reduce the statistical error margin to 0.2% – a level that would provide a very accurate prediction? To achieve that you’d need a sample of around 240,000! That’s a survey costing a whopping 133 times more than the typical poll! And that’s a cost people who commission polls would be unwilling to bear.
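Inverting the same formula gives the sample size needed to hit a given margin. A quick sketch using the 0.2% target mentioned above (again assuming a 95% confidence level and a 50/50 split):

```python
import math

def required_sample(target_moe_pct, p=0.5, z=1.96):
    """Sample size needed to achieve a target 95% margin of error (in percentage points)."""
    target = target_moe_pct / 100
    return math.ceil((z ** 2) * p * (1 - p) / target ** 2)

n = required_sample(0.2)
print(f"n = {n:,} (about {n / 1799:.0f}x the average poll of 1,799)")
# -> n = 240,100 (about 133x the average poll of 1,799)
```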
Data Collection
Not all polls are conducted in the same way, however. Different pollsters have different views as to the best ways to sample and weight their data. Most of these differences are minor, and all reflect the pollster’s experience of what approaches have delivered the most accurate results in the past. Taking a basket of several polls together creates a prediction more likely to iron out any outliers or odd results arising from such differences.
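To make the idea of a ‘basket’ concrete, here is a minimal sketch of pooling several polls into a single estimate. The polls listed are made up for illustration, and weighting by sample size is my own choice of averaging method, not necessarily the one behind the figures quoted in this article.

```python
# Pooling a basket of polls into one Leave estimate, weighted by sample size.
# The polls listed here are hypothetical examples, not real fieldwork.
polls = [
    {"leave_pct": 51.0, "n": 2000},
    {"leave_pct": 48.5, "n": 1000},
    {"leave_pct": 50.5, "n": 1500},
]

total_n = sum(p["n"] for p in polls)
pooled_leave = sum(p["leave_pct"] * p["n"] for p in polls) / total_n
print(f"Pooled Leave estimate: {pooled_leave:.1f}% across {total_n:,} interviews")
```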
However, there is one respect in which polls fall into two potentially very different camps when it comes to methodology. Some are conducted online, using self-completed surveys, where the sample is drawn from online consumer panels. Others are conducted by telephone, using randomly selected telephone sample lists.
Both have their potential advantages and disadvantages:
- Online: not everyone is online, and not everyone is easy to contact online. In particular, older people may be less likely to use the internet regularly. Any online sample will therefore under-represent people with limited internet access.
- Telephone: not everyone is accessible by phone. Many telephone sample lists are better at reaching people on landlines than on mobiles, which can make it difficult to reach younger people who have no landline, or anyone registered with the Telephone Preference Service.
But, that said, do these potential gaps make any difference?
Online vs Telephone Polling
So, returning to the Brexit result, is there any evidence to suggest either methodology provides a more accurate result?

A simple comparison between the results predicted by the online polls vs the telephone polls conducted immediately prior to the referendum reveals the following:
- Telephone polls: Overall, the average for these polls predicted a 51% majority in favour of Remain.
- Online polls: Overall, the average for these polls predicted a win for Leave by 50.5% (in fact it was 51.9%).
On the surface of things, the online polls appear to provide the more accurate prediction. However, it’s not quite that simple.
Online polls are cheaper to conduct than telephone polls. As a result, online polls can often afford to use larger samples. This reduces the level of statistical error. In the run up to the referendum the average online poll used a sample of 2,406 vs. an average of 1,038 for telephone polls.
The greater accuracy of the online polls over this period could therefore be explained largely by the fact that they were able to use larger samples. Since telephone is a more expensive medium, it is undeniably easier to achieve a larger sample via the online route.
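Plugging those two average sample sizes into the standard 95% margin-of-error formula (again assuming a roughly 50/50 split) illustrates the point:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (percentage points) for a proportion p on a sample of n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(f"Online (n=2,406):    +/- {margin_of_error(2406):.1f} points")  # -> ~2.0
print(f"Telephone (n=1,038): +/- {margin_of_error(1038):.1f} points")  # -> ~3.0
```

On these assumptions, the average telephone poll carried roughly half as much sampling error again as the average online poll.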
Accuracy over time
You might expect that, as people get nearer to the time of an election, they are more likely to come to a decision as to how they will vote.
However, our basket of polls in the month leading up to the Brexit vote shows no sign that the proportion of ‘undecided’ respondents changed. During the early part of the month, around 10% consistently stated they had not decided. Closer to the referendum, this number remained much the same.
However, when we compare polls conducted in early June with those conducted later, we see an interesting contrast. As it turns out, the early June polls predicted a result closer to the actual outcome than those run nearer to the referendum.
In fact, the polls seem to have detected a shift in opinion that occurred around the time of the assassination of the MP Jo Cox.

Clearly, the average for the early-month polls predicts a result very close to the final one. The basket of later polls, however, despite the advantage of larger samples, is off the mark by a significant margin. It is these later polls that reinforced the impression in some people’s minds that the country was likely to vote Remain.
But why?
Reasons for mis-prediction
Of course, it is difficult to explain why surveys conducted so close to the event showed a result some way off the final numbers.
If we look at opinion surveys conducted several months before the referendum, the differences are easier to explain. People change their minds over time, and those who are wavering eventually make them up.
A referendum conducted in January 2016 would have delivered a slightly different result to the one in June 2016, partly because a slightly different mix of people would have voted, and partly because some people would have held a different opinion in January from the one they held in June.
However, by June 2016, you’d expect that a great many people would have made up their minds.
Logically, however, I can think of four reasons why polls conducted during this period might have mis-predicted the result:
- Explainable statistical error margins.
- Unrepresentative approaches.
- Expressed intentions did not match actual behaviour.
- “Opinion Magnification”.
Explainable statistical error margins
Given the close nature of the vote, this is certainly a factor. Polls of the size typically used here would find it very difficult to precisely predict a near 50/50 split.
51.9% voted Leave. A poll of 2,000 could easily have predicted 49.7% (a narrow reverse result) and still have been within an acceptable statistical error margin.
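As a rough check on that claim, using the same standard 95% formula as earlier (with a 50/50 assumption): on a sample of 2,000 the margin works out at about 2.2 points, which puts 49.7% right at the lower edge of what sampling error alone could produce against a 51.9% outcome.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (percentage points) for a proportion p on a sample of n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

actual_leave = 51.9
moe = margin_of_error(2000)       # ~2.2 points on a sample of 2,000
lower_bound = actual_leave - moe  # ~49.7%: the lowest Leave reading sampling error allows
print(f"+/- {moe:.1f} points; sampling error alone could take a poll down to {lower_bound:.1f}%")
```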
18 of the 31 polls (58%) conducted in June 2016 returned results within the expected margin of statistical error of the final result. Where such polls called the wrong winner (as 3 of them did), the miss can be explained purely by the fact that the sample size was not big enough.
However, this means that 13 returned results that can’t be accounted for by expected statistical error alone.
If we look at surveys conducted in early June, 6 returned results outside the expected bounds of statistical variance. However, they were usually not far outside those bounds (just 0.27% beyond, on average).
The same cannot be said of surveys conducted later in June. Here, polls were getting the prediction wrong by an average of 1.28% beyond the expected range. All 7 of the surveys that predicted a result outside the expected statistical range predicted a Remain win.
This is too much of a coincidence. Something other than simple statistical error must have been at play.
Unrepresentative approaches
Not everyone is willing (or able) to answer Opinion Polls.
Sometimes a sample will contain biases. People without landlines would be harder to reach for a telephone survey. People who never or rarely go online will be less likely to complete online surveys.
These days many pollsters make a point of promising a ‘quick turnaround’. Some will boast that they can complete a poll of 2,000 interviews online in a single day. That kind of turnaround is great news for a fast-paced media world but will under-represent infrequent internet users.
ONS figures for 2016 showed that regular internet use was virtually universal amongst the under-55s. However, 12% of 55–64-year-olds, 26.9% of 65–74-year-olds and 61.3% of the over-75s had not been online in the three months to June 2016. Older people were more likely to vote Leave. But were the older people who don’t go online more likely to have voted Leave than those who do?
It is hard to measure the effect of such biases. Was there anything about those who could not / would not answer a survey that means they would have answered differently? Do they hold different opinions?
However, such biases won’t explain why the surveys of early June proved far better at predicting the result than those undertaken closer to the vote.
Expressed intention does not match behaviour
Sometimes, what people say and what they do are two different things. This probably doesn’t apply to most people, but we all know there are a few who say they will do one thing and then go ahead and do the opposite.
Also, it is only human to change your mind. Someone who planned to vote Remain in April, might have voted Leave on the day. Someone undecided in early June, may have voted Leave on the day. And some would switch in the other direction.
Without being able to link a survey answer to an actual vote, there is no way to test the extent to which people’s stated intentions fail to match their actual behaviour.
However, again, this kind of switching does not adequately explain the odd phenomenon we see in the June polling. How likely is it that people who planned to vote Leave in early June switched to Remain later in the month and then switched back to Leave at the very last minute? A few people, maybe, but to explain the pattern we see, it would have to have been something like 400,000 people. That seems very unlikely.
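As a rough sense-check on that 400,000 figure (my own back-of-envelope, not a calculation taken from any polling source): roughly 33.6 million votes were cast in the referendum, so 400,000 voters amounts to a little over 1% of the turnout, which is broadly the scale of the unexplained Remain lead seen in the later polls.

```python
# Back-of-envelope only: how large is 400,000 relative to the referendum turnout?
# The turnout figure is approximate, and linking it to the polling gap is my own
# rough inference rather than a calculation from the article.
votes_cast = 33_600_000   # roughly 33.6 million ballots cast in June 2016
switchers = 400_000

share = 100 * switchers / votes_cast
print(f"{switchers:,} voters = {share:.1f}% of all votes cast")  # -> 1.2%
```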
The Assassination of Jo Cox
This brings us back to the key event on 16 June – the assassination of Jo Cox. Jo was a Labour politician who strongly supported the Remain campaign and was a well-known champion of ethnic diversity. Her assassin was a right-wing extremist who held virulently anti-immigration views.
A significant proportion of Leave campaigners cited better immigration control as a key benefit of leaving the EU. Jo’s assassin was drawn from the most extremist fringe of such politics.
The boost in the Remain vote recorded in the polls that followed her death was attributed at the time to a backlash against the assassination: the idea being that some people, shocked by the implications of the incident, were persuaded to vote Remain, since doing so might be seen as an active rejection of the kind of extreme right-wing politics espoused by Jo’s murderer.
At the time it seemed a logical explanation. But as we now know, it turned out not to be the case on the day.
Reluctant Advocates
There will be some people who will, by natural inclination, keep their voting intentions secret.
Such people are rarely willing to express their views in polls, on social media, or even in conversation with friends and family. In effect they are Reluctant Advocates. They might support a cause but are unwilling to speak out in favour of it. They simply don’t like drawing attention to themselves.
There is no reason to suspect that this relatively small minority would necessarily be skewed any more or less to Leave or Remain than everyone else. So, in the final analysis, it is likely that the Leave and Remain voters among them will cancel each other out.
The characteristic they share is a reluctance to make their views public. However, the views they hold beyond this are not necessarily any different from most of the population.
An incident such as the assassination of Jo Cox can have one of two effects on public opinion (indeed it can have both):
- It can prompt a shift in public opinion which, given the result, we now know did not happen.
- It can prompt Reluctant Advocates to become vocal, resulting in a phenomenon we might call Opinion Magnification.
Opinion Magnification
Opinion Magnification creates the illusion that public opinion has changed or shifted to a greater extent than it actually has. This will not only be detected in Opinion Polls but also in social media chatter – indeed via any media through which opinion can be voiced.
The theory is that the assassination of Jo Cox shocked Remain-supporting Reluctant Advocates into becoming more vocal. By contrast, it had the opposite effect on Leave-supporting Reluctant Advocates.
The vast majority of Leave voters would clearly not have held the kind of extremist views espoused by Jo’s assassin. Indeed, most would have been shocked and would naturally have tried to distance themselves from the assassin’s views as much as possible. This fuelled the instincts of Leave-voting Reluctant Advocates to keep a low profile and discouraged them from sharing their views.
If this theory is correct, it would explain the slight uplift in the apparent Remain vote in the polls. This artificial uplift, or magnification, of Remain-supporting opinion would not have occurred were it not for the trigger event of 16 June 2016.
Of course, it is very difficult to prove that this is what actually occurred. However, it does appear to be the only explanation that fits the pattern we see in the polls during June 2016.
Conclusions
Given the close result of the 2016 referendum, it was always going to be a tough prediction for pollsters. Most polls are only accurate to around +/- 2% anyway, so the outcome was always on a knife edge.
However, in this case, in the days leading up to the vote, the polls were not just out by around 2% in a few cases. They were out by around 3 percentage points on average, predicting a result that was the reverse of the actual outcome.
Neither statistical error, potential biases nor any disconnect between stated and actual voting behaviour can adequately account for the pattern we saw in the polls.
A more credible explanation is distortion by Opinion Magnification, prompted by an extraordinary event. However, as the polling average shifted by no more than 2-3%, the potential impact of this phenomenon appears to be quite limited. Indeed, in a less closely contested vote, it would probably not have mattered at all.
Importantly, all this does not mean that polls should be junked. But it does mean that they should not be viewed as gospel. It also means that pollsters and journalists need to be alert for future Opinion Magnification events when interpreting polling results.
About Us
Synchronix Research offers a full range of market research services, polling services and market research training. We can also provide technical content writing services & content writing services in relation to survey reporting and thought leadership.
For any questions or enquiries, please email us: info@synchronixresearch.com
You can read more about us on our website.
You can catch up with our past blog articles here.
Sources, references & further reading:
How the pollsters got it wrong on the EU referendum, Guardian 24 June 2016
ONS data on internet users in the UK
Polling results from Opinion Polls conducted prior to the referendum as collated on Wikipedia
FiveThirtyEight – for Nate Silver’s views on polling accuracy