The 2015 UK Elections: Why 100% of the Polls Were Wrong


The general election of 2015 in the United Kingdom was held on May 7 to elect the country’s 56th Parliament. Voting took place in all 650 parliamentary constituencies, each electing one Member of Parliament to the House of Commons.

Figure 1: 100 Percent Wrong

Right up to election day, polls and commentators were predicting that the election was too close to call and, most likely, would result in a hung parliament. That’s what happened in 2010, when no party succeeded in winning a majority of seats and the Conservatives had to govern for the next five years in a coalition with the Liberal Democrats.

Of course, the polls and commentators were quite wrong. The Conservatives won 330 seats and 37 percent of the vote, giving them a clear 15-seat working majority.

In retrospect, the polls significantly underestimated the Conservative vote. In a table compiled by the British Broadcasting Corporation (“BBC”) (see Figure 1), 92 polls showed results ranging from 17 dead heats to three polls with a 6 percent gap between Conservative and Labour. Most polls showed Labour leading, and not one of the 92 predicted the 7 percent lead the Conservatives would actually achieve. The closest was a 6 percent Conservative lead, and only two polls predicted that.

What wasn’t the cause

This was the biggest poll upset in the UK since the Conservative vote was similarly underestimated in 1992. Analysis of that result by the Market Research Society settled on three reasons as to why the polls and pollsters got it wrong:

1. Conservative voters are less likely than Labour voters to reveal their loyalties (see Figure 2). This phenomenon has become known as the “Shy Tory” effect.

2. Polls can be fooled by a late swing; that is, by voters who change their mind after being polled.

3. Samples have to be sufficiently large to represent the electorate accurately. In 1992, some samples were not large enough.

However, there is no reason to suspect that any of these were significant factors in the 2015 fiasco:

Shy Tories: Most polling companies changed their methodology to account for this factor after 1992.

Late swing: Polls conducted the day before and the day of the election did not detect a swing.

Sample sizes: These varied, of course, from poll to poll, but all 92 polls in the BBC collection sampled at least 1,000 respondents. That’s enough to justify the usual error margin of ±3 percent, and some polls were much larger.
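The ±3 percent figure quoted above follows from the standard formula for the margin of error of a polled proportion. A minimal sketch (assuming a simple random sample and the worst-case proportion of 50 percent, which maximizes the error):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly the +/-3 percent quoted for these polls.
print(round(margin_of_error(1000) * 100, 1))  # -> 3.1
```

Larger samples shrink the margin only with the square root of n, which is one reason pollsters rarely go far beyond a few thousand respondents.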

Figure 2: Lying Tories, Honest Greens

A fourth reason, dubbed “Lazy Labour,” has also been proposed. The idea is that Labour voters are less likely than Conservatives to turn out and vote. However, Lazy Labour, like Shy Tory, is an effect that can be compensated for, and it is not, in any case, big enough to explain the gap between what the polls predicted would happen and what actually did occur.

Figure 3: On the Other Hand

Research that FTI Consulting conducted in May 2015, just after the election, confirmed that of all the party persuasions, Conservative voters are the least likely to respond to pollsters and the least likely to be truthful if they do, confirming the Shy Tory effect (and perhaps giving rise to a new “Lying Tory” factor) (see Figure 2). As noted, most pollsters compensate for the Shy Tory factor. But the misleading responses may have contributed to the impression of an inconclusive result; that is, no clear majority for any party. Adding more noise, we found that young people were more likely to participate in research than older voters but less likely to be truthful.

The forecast of a hung parliament was important because, as our May research also showed, voters didn’t like the idea of a coalition government. They had had one since the election in 2010, and, for a variety of reasons, 77 percent of UK voters didn’t want a repeat. And a full 83 percent of UK managers, for instance, believed that an outright victory for one party — any party — would be better for business than a minority government or another coalition.

So what did happen?

This year, a huge chunk of the electorate seemed to be up for grabs. According to a poll FTI Consulting conducted in February, 19 percent of voters had not decided how they were going to cast their ballot, and a further 30 percent said they could be persuaded to change their mind. In other words, just a few months before the election, almost half (49 percent) of the nation could not be relied upon to vote any one way (see Figure 3).

Consequently, the published pre-election polls predicting no clear majority for any party unsettled voters and caused them to vote differently from the way they would have if those polls had forecast a clear majority. In a political manifestation of the observer effect in physics (the changes that the act of observing a phenomenon makes to it), the published polls did not so much reflect public thinking as help shape it, making it impossible for the polls to reflect accurately what was transpiring. The uncertainty the pollsters forecast encouraged 26 percent of voters to turn out, 10 percent strongly so. And it prompted 23 percent of the electorate to vote tactically, many in pursuit of a clear majority (see Figure 4).

And so the polls were upended.

Figure 4: The Observer Effect in Politics

What pollsters didn’t take into account

Pollsters typically ask people: "If there were an election tomorrow, which party would you vote for?" That tends to elicit a response about which party a person prefers. Usually, that’s a reasonable way to forecast results.

But this time, something else was going on.

Voting behavior was affected by what pollsters were telling people — that there would be a hung parliament. So when they cast their vote, voters asked themselves a different question from the one the pollsters had asked. Instead of considering their party preference, voters asked themselves: "Which party would be likelier to win a clear majority, and how can I help make that happen?" That is, they voted tactically.

As Michael Bruter, Ph.D., a political scientist at the London School of Economics, said recently in an interview with Nature: "When you ask people who they are going to vote for, they very often think about what is best for them. But when you go back to the same people after the election and ask them who they voted for, we find that they voted much more in terms of what they think is best for the country."

So even the very last polls prior to the vote failed to predict the right result. But the single exit poll, conducted as people left the voting booths, was dead on.

The difference was between the question pollsters asked in advance and the question voters asked themselves as they picked a box to put an X in.

What pollsters should learn from 2015

For a host of reasons, including the expense of polling large numbers of people and the challenge of identifying smaller sample populations that will represent the whole, it is extremely difficult to predict election outcomes accurately.

Furthermore, the tried-and-true method of asking people which party they would vote for if the election were held tomorrow, which is both simple to understand and easy to administer, is likely to be the wrong tool today. We need to gather data that reflect a more complex political environment in which large segments of the population are voting tactically, as we saw in the recent UK election.

Bruter and others argue that it’s more effective to ask people about their values and match those with the parties. But no one would deny that such a poll would be more difficult and expensive to conduct and interpret.

And as communication — via mobile phone, text, email and social media — continues to get easier, voter sentiment may become more volatile, with opinions and arguments proliferating and dispersing more widely and quickly. This further complicates the pollster’s task of taking a snapshot of today and predicting what that picture will mean for tomorrow.

Even if pollsters devise a way to account for all these complexities, it’s unlikely, given that their main objective is to get headline-grabbing numbers into the morning’s papers, that they would ever decide it is worth the time and expense to do so.

The recently announced British Polling Council inquiry aims to determine what went wrong in May so that future polls can be more accurate. But the UK went through a similar examination following the 1992 election, in which many polls predicted a hung parliament and the subsequent Conservative majority took pollsters by surprise. We could have learned the right lesson from that episode — that in the complex multi-party voting environment of the UK, a single simple question isn’t an adequate determinant of how people will vote. But we didn’t take heed. Instead, we chose to believe that, in our increasingly complex and digitally connected world, a few weighting adjustments (aka fudge factors) would suffice.

But that didn’t happen this time and probably won’t the next time. At its best, polling can help us understand the different groups and factors that influence the vote. At its worst, as in this most recent election, it throws a spanner into the works.

Research Methodology

Research was conducted online by FTI Consulting during the following time periods and is reflective of the UK general population aged 18 years and older:

February 2015 (pre-election): n=2,111 respondents from February 6-9, 2015

May 2015 (post-election): n=2,066 respondents from May 8-11, 2015

The standard convention for rounding has been applied, and, consequently, some totals do not add up to 100 percent. For further information about this research, please contact dan.healy@fticonsulting.com.
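The rounding caveat above can be illustrated with a toy example (the percentages here are hypothetical, not taken from the survey): three shares that sum to exactly 100 can fail to do so once each is rounded independently.

```python
# Hypothetical response shares that sum to exactly 100.0
shares = [33.4, 33.3, 33.3]

# Rounding each figure independently to the nearest whole percent,
# as published tables typically do.
rounded = [round(s) for s in shares]

print(rounded, sum(rounded))  # -> [33, 33, 33] 99
```

This is why survey tables often carry a footnote like the one above rather than force the published figures to total 100.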

© Copyright 2015. The views expressed in this article are those of the author and not necessarily those of FTI Consulting, Inc., or its other professionals.