Archive for opinion polls

New Polling Agency

Posted in Bad Statistics on August 10, 2018 by telescoper

There is a new polling agency on the block, called DeltaPoll.

I had never heard of them until last week, when they had a strange poll published in the Daily Mail (which, obviously, I’m not going to link to).

I think we need new pollsters like we need a hole in the head. These companies are forever misrepresenting the accuracy of their surveys and they confuse more than they inform. I was intrigued, however, so I looked up their Twitter profile and found this:

They don’t have a big Twitter following, but the names behind it have previously been associated with other polling agencies, so perhaps it’s not as dodgy as I assumed.

On the other hand, what on Earth does ’emotional and mathematical measurement methods’ mean?


Polls Apart

Posted in Bad Statistics, Politics on May 9, 2017 by telescoper

Time for some random thoughts about political opinion polls, in the light of Sunday’s French Presidential Election result.

We all know that Emmanuel Macron beat Marine Le Pen in the second round ballot: he won 66.1% of the votes cast to Le Pen’s 33.9%. That doesn’t count the very large number of spoilt ballots or abstentions (25.8% in total). The turnout was down on previous elections, but at 74.2% it’s still a lot higher than we can expect in the UK at the forthcoming General Election.

The French opinion polls were very accurate in predicting the first round results, getting the vote shares for the four top candidates right to within a percentage point or two, which is as good as it gets for typical sample sizes.

Harry Enten has written a post on Nate Silver’s FiveThirtyEight site claiming that the French opinion polls for the second round “runoff” were inaccurate. He bases this on the observation that the “average poll” in between the two rounds of voting gave Macron a lead of about 22% (61%-39%). That’s true, but it assumes that opinions did not shift in the latter stages of the campaign. In particular it ignores Marine Le Pen’s terrible performance in the one-on-one TV debate against Macron on 4th May. Polls conducted after that debate (especially a big one by IPSOS with a sample of 5331) gave a figure more like 63-37, i.e. a 26-point lead.

In any case it can be a bit misleading to focus on the difference between the two vote shares. In a two-horse race, if you’re off by +3 for one candidate you will be off by -3 for the other. In other words, underestimating Macron’s vote automatically means over-estimating Le Pen’s. A ‘normal’ sampling error therefore looks twice as bad if you frame it in terms of the difference. The last polls, which put Macron at 63%, were off by only about 3%, which is a normal sampling error.
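The doubling can be made concrete with a toy calculation (the rounded figures below are mine, taken from the results quoted above):

```python
# The same sampling error looks twice as large when framed as a lead
# (difference of shares) rather than as an error on one candidate's share.
true_macron, true_lepen = 66.1, 33.9      # actual second-round shares
polled_macron = 63.0                      # roughly what the last polls said
polled_lepen = 100.0 - polled_macron      # two-horse race: shares sum to 100

share_error = true_macron - polled_macron                      # ~3 points
spread_error = (true_macron - true_lepen) - (polled_macron - polled_lepen)

print(f"error on Macron's share: {share_error:.1f} points")
print(f"error on the lead:       {spread_error:.1f} points")   # exactly double
```

Any error on one candidate’s share is mirrored with opposite sign on the other’s, so the error on the lead is always exactly twice the error on either share.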

The polls were off by more than they have been in previous years (when they typically predicted the spread to within 4%). There’s also the question of how the big gap between the two candidates may have influenced voter behaviour, increasing the number of no-shows.

So I don’t think the French opinion polls did as badly as all that. What still worries me, though, is that the different polls consistently agreed with one another to within 1% or so, when there really should have been visible sampling fluctuations. Fishy.

By way of a contrast, consider a couple of recent opinion polls conducted by YouGov in Wales. The first, conducted in April, gave the following breakdown of likely votes:


The apparent ten-point lead for the Conservatives over Labour (which is traditionally dominant in Wales) created a lot of noise in the media as it showed the Tories up 12% on the previous such poll taken in January (and Labour down 3%); much of the Conservative increase was due to a collapse in the UKIP share. Here’s the long-term picture from YouGov:


As an aside I’ll mention that ‘barometer’ surveys like this are sometimes influenced by changes in weightings and other methodological factors that can artificially produce different outcomes. I don’t know if anything changed in this regard between January 2017 and May 2017 that might have contributed to the large swing to the Tories, so let’s just assume that it’s “real”.

This “sensational” result gave pundits (e.g. Cardiff’s own Roger Scully) the opportunity to construct narratives about the implications for the forthcoming General Election.

Note, however, the sample size (1029), which implies an uncertainty of ±3% or so in the result. It came as no surprise to me, then, to see that the next poll by YouGov was a bit different: Conservatives on 41% (+1), but Labour on 35% (+5). That’s still grim for Labour, of course, but not quite as grim as being 10 points behind.
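Here’s the back-of-envelope version of that ±3% claim (a sketch using simple binomial errors, not YouGov’s actual weighting methodology; the April shares of 40% and 30% are inferred from the changes quoted above):

```python
# Rough 95% uncertainty on each party's share in a poll of n = 1029.
from math import sqrt

n = 1029
for party, p in (("Con", 0.40), ("Lab", 0.30)):   # inferred April shares
    se = sqrt(p * (1 - p) / n)                    # binomial standard error
    print(f"{party}: {100 * p:.0f}% ± {100 * 2 * se:.1f} points (95%)")
```

Each share carries roughly ±3 points, so a swing of a few points between successive polls is entirely consistent with no change in underlying opinion.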

So what happened in the two weeks between these two polls? Well, one thing is that many places had local elections which resulted in lots of campaigning. In my ward, at least, that made a big difference: Labour increased its share of the vote compared to the 2012 elections (on a 45% turnout, which is high for local elections). Maybe then it’s true that Labour has been “fighting back” since the end of April.

Alternatively, and to my mind more probably, what we’re seeing is just the consequence of very large sampling errors. I think it’s likely that the Conservatives are in the lead, but by an extremely uncertain margin.

But why didn’t we see fluctuations of this magnitude in the French opinion polls of similar size?

Answers on a postcard, or through the comments box, please.

Politics, Polls and Insignificance

Posted in Bad Statistics, Politics on July 29, 2014 by telescoper

In between various tasks I had a look at the news and saw a story about opinion polls that encouraged me to make another quick contribution to my bad statistics folder.

The piece concerned (in the Independent) includes the following statement:

A ComRes survey for The Independent shows that the Conservatives have dropped to 27 per cent, their lowest in a poll for this newspaper since the 2010 election. The party is down three points on last month, while Labour, now on 33 per cent, is up one point. Ukip is down one point to 17 per cent, with the Liberal Democrats up one point to eight per cent and the Green Party up two points to seven per cent.

The link added to ComRes is mine; the full survey can be found here. Unfortunately the report, as is sadly almost always the case in surveys of this kind, makes no mention of the statistical uncertainty in the poll. In fact the last point is based on a telephone poll of a sample of just 1001 respondents. Suppose the fraction of the population intending to vote for a particular party is p. For a sample of size n with x respondents indicating that they intend to vote for that party, one can straightforwardly estimate p \simeq x/n. So far so good, as long as there is no bias induced either by the form of the question asked or by the selection of the sample, which for a telephone poll is doubtful.

A little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:

\sigma = \sqrt{\frac{p(1-p)}{n}}

For the sample size given, and a value p \simeq 0.33 this amounts to a standard error of about 1.5%. About 95% of samples drawn from a population in which the true fraction is p will yield an estimate within p \pm 2\sigma, i.e. within about 3% of the true figure. In other words the typical variation between two samples drawn from the same underlying population is about 3%.
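The numbers above are easy to reproduce (a two-line check, nothing more):

```python
# Standard error on an estimated vote share: sigma = sqrt(p(1-p)/n),
# here with p = 0.33 and n = 1001 as in the ComRes poll.
from math import sqrt

p, n = 0.33, 1001
sigma = sqrt(p * (1 - p) / n)
print(f"standard error: {100 * sigma:.1f}%")        # 1.5%
print(f"95% interval:   ±{100 * 2 * sigma:.1f}%")   # ±3.0%
```

Note that σ depends only weakly on p near the middle of the range, so roughly the same ±3% applies to all the parties polling in the tens of per cent.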

If you don’t believe my calculation then you could use ComRes’ own “margin of error calculator”. The UK electorate as of 2012 numbered 46,353,900, and a sample size of 1001 returns a margin of error of 3.1%. This figure is not, however, quoted in the report.

Looking at the figures quoted in the report will tell you that all of the changes reported since last month’s poll are within the sampling uncertainty and are therefore consistent with no change at all in underlying voting intentions over this period.

A summary of the report posted elsewhere states:

A ComRes survey for the Independent shows that Labour have jumped one point to 33 per cent in opinion ratings, with the Conservatives dropping to 27 per cent – their lowest support since the 2010 election.

No! There’s no evidence of support for Labour having “jumped one point”, even if you could describe such a marginal change as a “jump” in the first place.

Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?

And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?

General Purpose Election Blog Post

Posted in Bad Statistics, Politics on April 14, 2010 by telescoper

A dramatic new <insert name of polling organization, e.g. GALLUP> opinion poll has revealed that the <insert name of political party> lead over <insert name of political party> has WIDENED/SHRUNK/NOT CHANGED dramatically. This almost certainly means a <insert name of political party> victory or a hung parliament. This contrasts with a recent <insert name of polling organization, e.g. YOUGOV> poll which showed that the <insert name of political party> lead had WIDENED/SHRUNK/NOT CHANGED which almost certainly meant a <insert name of political party> victory or a hung parliament.

Political observers were quick to point out that we shouldn’t read too much into this poll, as tomorrow’s <insert name of polling organization> poll shows the <insert name of political party> lead over <insert name of political party> has WIDENED/SHRUNK/NOT CHANGED dramatically, almost certainly meaning a <insert name of political party> victory or a hung parliament.

(adapted, without permission, from Private Eye)