There is a danger that political polls, like economic forecasts, are taken as fact. In this blog, Peter Kellner, former president of YouGov, explores their limitations and goes on to explain why understanding those limitations is precisely what makes polls so valuable.
Adlai Stevenson got it right. The Democratic presidential candidate who lost twice to Dwight Eisenhower in the 1950s once said: “Polls should be taken but not inhaled”.
Pollsters, like economic forecasters, suffer from the tendency of journalists, politicians and others to judge them as simply “right” or “wrong”, rather than as the best efforts of (mainly) intelligent people to tell us what they think is going on. They are usually somewhere near the truth but seldom precisely accurate – and often mocked for errors that are actually quite modest. Imagine a guest arriving two minutes late at a restaurant and being scolded as fiercely as if they were two hours late.
That, though, is only the start of our problems. Here are five factors that intelligent pollwatchers should take into account.
First, pundits predict; polls don’t. Polls provide our best estimate of the state of public opinion at the time they were conducted. We can never be completely certain what voters will think in a month or a year’s time. The one partial exception to this is the final estimate the polls produce on the eve of a general election, when voters have little time to change their mind. But even then, polls could be caught out by differential turnout – the supporters of one party being more determined to vote than the supporters of another.
Second, polls outside an election campaign differ on what they are actually measuring. In the aftermath of Kwasi Kwarteng’s mini budget, two respected companies produced sharply different figures for Labour’s lead: 33%, said YouGov; 19%, said Opinium. Much of the difference can be explained by the fact that the two companies were answering two questions that sound alike but are in fact different:

“What would be the figures for each party if we managed to speak to every elector in Great Britain and added up the numbers supporting each party?”

Or:

“If nobody switched parties between now and the next general election, what would the result be?”
YouGov’s polls seek to answer the first question, while Opinium seeks to answer the second. They are different because of the way they deal with people who say they currently don’t know how they would vote. YouGov’s data indicates that of the 14 million people who voted Conservative in 2019, five million now don’t know how they would vote. They are omitted from the voting intention figures. Opinium assumes that many of them will in the end vote, and that most of them will return to the Tory fold. It also draws on past election data, which shows consistently that slightly more Labour than Conservative supporters end up staying at home. If YouGov had adopted Opinium’s approach, it would have cut Labour’s lead by around ten points. Which method you prefer is a matter of judgement about how best to treat those pesky “don’t knows”.
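To make the difference concrete, here is a minimal sketch in Python. The raw counts are invented for illustration (they are not YouGov or Opinium data, and the 70% return rate is my own assumption); only the pattern – a large block of 2019 Conservative “don’t knows” – echoes the figures above.

```python
# Two ways to treat "don't know" respondents in a voting intention poll.
# All counts below are hypothetical, chosen only to illustrate the method.

raw = {"Labour": 450, "Conservative": 250, "Other": 150}
dont_know_2019_con = 120  # "don't knows" who voted Conservative in 2019

def shares(counts):
    total = sum(counts.values())
    return {party: 100 * n / total for party, n in counts.items()}

# Method 1 (exclude): drop the "don't knows" from the headline figures.
method1 = shares(raw)

# Method 2 (reallocate): assume most 2019 Tory "don't knows" return to
# the fold -- here, an assumed 70%.
reallocated = dict(raw)
reallocated["Conservative"] += 0.7 * dont_know_2019_con
method2 = shares(reallocated)

for name, s in [("Exclude don't knows", method1),
                ("Reallocate don't knows", method2)]:
    print(f"{name}: Labour lead = {s['Labour'] - s['Conservative']:.1f} points")
```

With these made-up numbers, the first method gives a Labour lead of about 23 points and the second about 12 – a gap on the scale of the ten points mentioned above.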
That issue connects to a wider third point. Answering a mid-term poll is different from deciding a general election vote. A worried Conservative could tell a pollster that they would vote Labour or Liberal Democrat in order to give vent to their frustrations, knowing that they will not wake up next morning to a change of government. By-elections are often similar: big anti-government swings in mid-term send a message without handing power to the opposition. This is why one of the most common patterns in polls for more than 60 years is for support for the governing party to slump in mid-term but recover as the election approaches. It was even true in the run-up to Labour’s landslide in 1997. Two years beforehand, the Tories were 40 points behind (sometimes more). They lost the popular vote in the election by 13 points – a lot, but far less than 40.
A pundit might well say now that the Conservatives will recover to some extent between now and the next election. I, for one, expect this. But, as those financial ads warn, past record is no guarantee of future performance.
Fourth, polls, however intelligently designed, are prone to systematic errors. Long gone are the days when most people would happily give their views to pollsters in the street or on the phone. Response rates for both have collapsed, making them increasingly expensive. Most political polls these days are conducted online. (YouGov pioneered this in Britain 22 years ago; other companies have now switched to online research.)
However, only a minority of electors belong to online panels, which generate the email addresses for pollsters to contact – and by no means all of them agree to take part in political polls. This means that pollsters must take the answers from the people they can reach and deduce the views of those they can’t reach. They are in the business of modelling public opinion, not simply measuring it. They use various methods to select and weight their samples in order to reflect the electorate as a whole – by age, gender, education, social class, region, past vote and so on. But it is always possible that, at any given time, some other factor influences the way people vote, and it turns out that the sampling and weighting system has not allowed for this.
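What does weighting actually involve? Here is a toy post-stratification on a single variable – a minimal sketch, not any company’s actual method, and the target shares and responses are invented. Real polls weight on many variables at once, typically by iterative “raking”.

```python
from collections import Counter

# Invented raw responses: (age group, stated vote). Online panels often
# over-represent some groups relative to the electorate.
sample = (
    [("18-34", "Labour")] * 35 + [("18-34", "Conservative")] * 15 +
    [("35-54", "Labour")] * 20 + [("35-54", "Conservative")] * 15 +
    [("55+", "Labour")] * 10 + [("55+", "Conservative")] * 25
)

# Assumed population age profile (illustrative, not census data).
targets = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

n = len(sample)
age_counts = Counter(age for age, _ in sample)

# Each respondent's weight: target share / sample share for their group.
weight = {age: targets[age] / (age_counts[age] / n) for age in targets}

weighted = Counter()
for age, vote in sample:
    weighted[vote] += weight[age]

total = sum(weighted.values())
for party in ("Labour", "Conservative"):
    raw_pct = 100 * sum(1 for _, v in sample if v == party) / n
    print(f"{party}: weighted {100 * weighted[party] / total:.1f}% "
          f"(unweighted {raw_pct:.1f}%)")
```

In this invented sample the young (and Labour-leaning) respondents are over-represented, so weighting them down turns an apparent 54–46 Labour lead into a statistical dead heat – and, as the paragraph above notes, the whole exercise fails if the factor that matters is one the weighting scheme never measured.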
Moreover, when one party is out of favour, its supporters might be under-recorded, either because they are more reluctant to take part in polls at all, or to admit their loyalty. This might be one reason why so many people who voted Conservative in 2019 now say “don’t know”.
Fifth, and lastly, even the best poll, getting its modelling right, is prone to straightforward sampling deviation. By convention, polls refer to the 95% confidence limit (for statisticians, the two-sigma deviation level). That is, the range within which the true figure should fall 19 times out of 20. In a normal political poll, reporting the voting intentions of around 1,500 people, its figures for each of the main parties should be within 2.5 percentage points of their true support (assuming there are no other sources of error). However, that figure should be qualified. Two-thirds of the time, the margin of error is half that – 1¼ points – but one poll in 20 will fall outside even the full 2.5-point range. It will be a rogue poll not because the pollsters are no good, but because they are unlucky.
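The arithmetic behind those margins is simple, as this sketch shows (assuming pure random sampling, which weighted online samples only approximate):

```python
import math

n = 1500  # typical poll sample size
p = 0.5   # party's share of support; p = 0.5 gives the widest margin

se = math.sqrt(p * (1 - p) / n)  # standard error of a sample proportion
print(f"one sigma (covers about 2 polls in 3): ±{100 * se:.2f} points")
print(f"two sigma (covers 19 polls in 20):     ±{100 * 1.96 * se:.2f} points")
```

This reproduces the figures above: roughly ±1¼ points two-thirds of the time, and ±2.5 points at the 95% level. In practice, weighting tends to widen the effective margin somewhat (statisticians call this the design effect).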
In September, the month that saw Liz Truss becoming Prime Minister, her first, and now former, Chancellor delivering his mini Budget, the financial markets going haywire and Tory support tanking, I counted 38 published voting intention polls. Statistically, two of them were probably rogue polls. Which two? Good question, but forgive me if I leave you to check them out and make up your own mind.
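For the record, “two of them” is just the one-in-20 rate at work. A quick check, assuming each poll independently has a 5% chance of falling outside its stated range:

```python
polls, rogue_rate = 38, 0.05

expected = polls * rogue_rate
p_at_least_one = 1 - (1 - rogue_rate) ** polls

print(f"expected rogue polls among {polls}: {expected:.1f}")       # ~1.9
print(f"chance of at least one rogue poll: {p_at_least_one:.0%}")  # ~86%
```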