## The Insignificance of ORB

Posted in Bad Statistics on April 5, 2016 by telescoper

A piece about opinion polls ahead of the EU Referendum which appeared in today’s Daily Torygraph has spurred me on to make a quick contribution to my bad statistics folder.

The piece concerned includes the following statement:

David Cameron’s campaign to warn voters about the dangers of leaving the European Union is beginning to win the argument ahead of the referendum, a new Telegraph poll has found.

The exclusive poll found that the “Remain” campaign now has a narrow lead after trailing last month, in a sign that Downing Street’s tactic – which has been described as “Project Fear” by its critics – is working.

The piece goes on to explain

The poll finds that 51 per cent of voters now support Remain – an increase of 4 per cent from last month. Leave’s support has decreased five points to 44 per cent.

This conclusion is based on the results of a survey by ORB in which the number of participants was 800. Yes, eight hundred.

How much can we trust this result on statistical grounds?

Suppose the fraction of the population intending to vote a particular way in the EU referendum is $p$. For a sample of size $n$ in which $x$ respondents indicate that intention, one can straightforwardly estimate $p \simeq x/n$. So far so good, as long as there is no bias induced by the form of the question asked, nor in the selection of the sample (which, given that such polls have been all over the place, seems rather unlikely).

A little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of $p$ in terms of the sampling error:

$\sigma = \sqrt{\frac{p(1-p)}{n}}$

For the sample size of 800 given, and an actual value $p \simeq 0.5$ this amounts to a standard error of about 2%. About 95% of samples drawn from a population in which the true fraction is $p$ will yield an estimate within $p \pm 2\sigma$, i.e. within about 4% of the true figure. In other words the typical variation between two samples drawn from the same underlying population is about 4%. In other other words, the change reported between the two ORB polls mentioned above can be entirely explained by sampling variation and does not at all imply any systematic change of public opinion between the two surveys.
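The arithmetic above is easy to check numerically; a minimal sketch in Python (the function name is mine, the numbers are those quoted in the post):

```python
import math

def sampling_error(p, n):
    """Standard error of a proportion p estimated from a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

n = 800   # ORB sample size
p = 0.5   # roughly the true fraction in a close race
sigma = sampling_error(p, n)
print(f"standard error: {sigma:.1%}")            # about 1.8%
print(f"95% margin (2 sigma): {2 * sigma:.1%}")  # about 3.5%
```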

I need hardly point out that in a two-horse race (between “Remain” and “Leave”) an increase of 4% in the Remain vote corresponds to a decrease in the Leave vote by the same 4%, so a 50-50 population vote can easily generate a margin as large as 54-46 in such a small sample.

Why do pollsters bother with such tiny samples? With such a large margin of error the results are basically meaningless.

I object to the characterization of the Remain campaign as “Project Fear” in any case. I think it’s entirely sensible to point out the serious risks that an exit from the European Union would generate for the UK in loss of trade, science funding, financial instability, and indeed the near-inevitable secession of Scotland. But in any case this poll doesn’t indicate that anything is succeeding in changing anything other than statistical noise.

Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?

And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?

## More fine structure silliness …

Posted in Bad Statistics, The Universe and Stuff on March 17, 2016 by telescoper

Wondering what had happened to claims of a spatial variation of the fine-structure constant?

Well, they’re still around but there’s still very little convincing evidence to support them, as this post explains…

I noticed this paper by Pinho & Martins on astro-ph today (accepted to Phys Lett B) concerning the alleged spatial variation of the fine-structure constant; I say alleged, but from reading this paper you’d think it was a settled debate with only the precise functional form of the final spatial model left to be decided. In this latest instalment the authors propose to consider what updates can be made to the parameters of the spatial dipole model given 10 new quasar absorption datapoints (along 7 unique sight lines) drawn from post-Webb et al. studies published in the recent literature, with “the aim of ascertaining whether the evidence for the dipolar variation is preserved”. Which, since they don’t consider the possibility of systematic errors* in the Webb et al. dataset, it is: 10 data points with slightly lower measurement errors (and supposedly lower systematic errors) cannot trump the couple of hundred original measurements…


## A Bump at the Large Hadron Collider

Posted in Bad Statistics, The Universe and Stuff on December 16, 2015 by telescoper

Very busy, so just a quickie today. Yesterday the good folk at the Large Hadron Collider announced their latest batch of results. You can find the complete set from the CMS experiment here and from ATLAS here.

The result that everyone is talking about is shown in the following graph, which shows the number of diphoton events as a function of energy:

Attention is focussing on the apparent “bump” at around 750 GeV; you can find an expert summary by a proper particle physicist here and another one here.

It is claimed that the “significance level” of this “detection” is 3.6σ. I won’t comment on that precise statement partly because it depends on the background signal being well understood but mainly because I don’t think this is the right language in which to express such a result in the first place. Experimental particle physicists do seem to be averse to doing proper Bayesian analyses of their data.

However if you take the claim in the way such things are usually presented, it is roughly equivalent to a statement that the odds against this being a real detection are greater than 6000:1. If any particle physicists out there are willing to wager £6000 against £1 of mine that this result will be confirmed by future measurements then I’d happily take them up on that bet!
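For reference, converting a quoted significance in sigmas into a tail probability and equivalent odds takes only a couple of lines; the one-tailed Gaussian convention below is my assumption (it is the usual one for quoting “local” significance in particle physics, but is not stated in the announcement itself):

```python
from math import erf, sqrt

def sigma_to_odds(n_sigma):
    """One-tailed Gaussian tail probability P(Z > n_sigma) and odds against."""
    p = 0.5 * (1 - erf(n_sigma / sqrt(2)))
    return p, 1 / p

# 3.6 sigma is the significance quoted for the diphoton bump
p, odds = sigma_to_odds(3.6)
print(f"p-value: {p:.2e}, odds against: about {odds:,.0f} to 1")
```

This reproduces the rough 6000:1 figure mentioned above.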

P.S. Entirely predictably there are 10 theory papers on today’s arXiv offering explanations of the alleged bump, none of which says that it’s a noise feature.

## Gamma-Ray Bursts and the Cosmological Principle

Posted in Astrohype, Bad Statistics, The Universe and Stuff on September 13, 2015 by telescoper

There’s been a reasonable degree of hype surrounding a paper published in Monthly Notices of the Royal Astronomical Society (and available on the arXiv here). The abstract of this paper reads:

According to the cosmological principle (CP), Universal large-scale structure is homogeneous and isotropic. The observable Universe, however, shows complex structures even on very large scales. The recent discoveries of structures significantly exceeding the transition scale of 370 Mpc pose a challenge to the CP. We report here the discovery of the largest regular formation in the observable Universe; a ring with a diameter of 1720 Mpc, displayed by 9 gamma-ray bursts (GRBs), exceeding by a factor of 5 the transition scale to the homogeneous and isotropic distribution. The ring has a major diameter of 43° and a minor diameter of 30° at a distance of 2770 Mpc in the 0.78 < z < 0.86 redshift range, with a probability of 2 × 10⁻⁶ of being the result of a random fluctuation in the GRB count rate. Evidence suggests that this feature is the projection of a shell on to the plane of the sky. Voids and string-like formations are common outcomes of large-scale structure. However, these structures have maximum sizes of 150 Mpc, which are an order of magnitude smaller than the observed GRB ring diameter. Evidence in support of the shell interpretation requires that temporal information of the transient GRBs be included in the analysis. This ring-shaped feature is large enough to contradict the CP. The physical mechanism responsible for causing it is unknown.

The so-called “ring” can be seen here:

In my opinion it’s not a ring at all, but an outline of Australia. What’s the probability of a random distribution of dots looking exactly like that? Is it really evidence for the violation of the Cosmological Principle, or for the existence of the Cosmic Antipodes?

For those of you who don’t get that gag, a cosmic antipode occurs in, e.g., closed Friedmann cosmologies in which the spatial sections take the form of a hypersphere (or 3-sphere). The antipode is the point diametrically opposite the observer on this hypersurface, just as it is for the surface of a 2-sphere such as the Earth. The antipode is only visible if it lies inside the observer’s horizon, a possibility which is ruled out for standard cosmologies by current observations. I’ll get my coat.

Anyway, joking apart, the claims in the abstract of the paper are extremely strong but the statistical arguments supporting them are deeply unconvincing. Indeed, I am quite surprised the paper passed peer review. For a start there’s a basic problem of “a posteriori” reasoning here. We see a group of objects that form a ~~map of Australia~~ ring and then are surprised that such a structure appears so rarely in simulations of our favourite model. But all specific configurations of points are rare in a Poisson point process. We would be surprised to see a group of dots in the shape of a pretzel too, or the face of Jesus, but that doesn’t mean that such an occurrence has any significance. It’s an extraordinarily difficult problem to put a meaningful measure on the space of geometrical configurations, and this paper doesn’t succeed in doing that.

For a further discussion of the tendency that people have to see patterns where none exist, take a look at this old post from which I’ve taken this figure which is generated by drawing points independently and uniformly randomly:

I can see all kinds of shapes in this pattern, but none of them has any significance (other than psychological). In a mathematically well-defined sense there is no structure in this pattern! Add to that difficulty the fact that so few points are involved and I think it becomes very clear that this “structure” doesn’t provide any evidence at all for the violation of the Cosmological Principle. Indeed it seems neither do the authors. The very last paragraph of the paper is as follows:

GRBs are very rare events superimposed on the cosmic web identified by superclusters. Because of this, the ring is probably not a real physical structure. Further studies are needed to reveal whether or not the Ring could have been produced by a low-frequency spatial harmonic of the large-scale matter density distribution and/or of universal star forming activity.

It’s a pity that this note of realism didn’t make it into either the abstract or, more importantly, the accompanying press release. Peer review will never be perfect, but we can do without this sort of hype. Anyway, I confidently predict that a proper refutation will appear shortly….

P.S. For a more technical discussion of the problems of inferring the presence of large structures from sparsely-sampled distributions, see here.

## Adventures with the One-Point Distribution Function

Posted in Bad Statistics, Books, Talks and Reviews, The Universe and Stuff on September 1, 2015 by telescoper

As I promised a few people, here are the slides I used for my talk earlier today at the meeting I am attending. Actually I was given only 30 minutes and used up a lot of that time on two things that haven’t got much to do with the title. One was a quiz to identify the six famous astronomers (or physicists) who had made important contributions to statistics (Slide 2) and the other was on some issues that arose during the discussion session yesterday evening. I didn’t in the end talk much about the topic given in the title, which was about how, despite learning a huge amount about certain aspects of galaxy clustering, we are still far from a good understanding of the one-point distribution of density fluctuations. I guess I’ll get the chance to talk more about that in the near future!

P.S. I think the six famous faces should be easy to identify, so there are no prizes but please feel free to guess through the comments box!

## Statistics in Astronomy

Posted in Bad Statistics, The Universe and Stuff on August 29, 2015 by telescoper

A few people at the STFC Summer School for new PhD students in Cardiff last week asked if I could share the slides. I’ve given the Powerpoint presentation to the organizers so presumably they will make the presentation available, but I thought I’d include it here too. I’ve corrected a couple of glitches I introduced trying to do some last-minute hacking just before my talk!

As you will infer from the slides, I decided not to compress an entire course on statistical methods into a one-hour talk. Instead I tried to focus on basic principles, primarily to get across the importance of Bayesian methods for tackling the usual tasks of hypothesis testing and parameter estimation. The Bayesian framework offers the only mathematically consistent way of tackling such problems and should therefore be the preferred method of using data to test theories. Of course if you have data but no theory, or a theory but no data, any method is going to struggle. And if you have neither data nor theory you’d be better off getting one or the other before trying to do anything. Failing that, you could always go down the pub.
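As a toy illustration of Bayesian parameter estimation (my own example, not taken from the slides): with a uniform prior, the posterior for a proportion after $x$ “successes” in $n$ trials is a Beta distribution. The numbers below echo the polling example in the first post above; the function name is mine.

```python
import math

def posterior_summary(x, n):
    """Mean and standard deviation of the Beta(x+1, n-x+1) posterior
    for a proportion, assuming a uniform prior on [0, 1]."""
    a, b = x + 1, n - x + 1
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# e.g. 408 of 800 respondents (51%) supporting one option
mean, sd = posterior_summary(408, 800)
print(f"posterior mean: {mean:.3f}, sd: {sd:.3f}")
```

The posterior standard deviation here essentially reproduces the sampling error of the frequentist calculation, as it should for a flat prior and a large sample.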

Rather than just leave it at that, I thought I’d append some stuff I’ve written previously on this blog, many years ago, about the interesting historical connections between Astronomy and Statistics.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th century, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his idea of universal gravitation which, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he certainly didn’t carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the leaning tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics Galileo realised that however careful he was taking measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.

In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements $(x_i, y_i)$ to which you would like to fit a straight line of the form $y = mx + c$. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured $y_i$ and the predicted values $mx_i + c$ at the measured values $x = x_i$. He used the absolute difference $|y_i - (mx_i + c)|$ so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis, and as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.
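Galileo’s criterion is easy to try numerically. The sketch below fits $y = mx + c$ by brute-force minimization of the summed absolute deviations; the data, noise level, and search grid are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)  # noisy straight line

# Brute-force search over candidate (m, c) pairs: crude, but enough
# to illustrate the least-absolute-deviations criterion.
ms = np.linspace(0, 4, 401)
cs = np.linspace(-2, 4, 601)
M, C = np.meshgrid(ms, cs, indexing="ij")
# Sum of |y_i - (m x_i + c)| for every candidate line
cost = np.abs(y[None, None, :] - (M[..., None] * x + C[..., None])).sum(axis=-1)
i, j = np.unravel_index(cost.argmin(), cost.shape)
print(f"m = {ms[i]:.2f}, c = {cs[j]:.2f}")  # close to the true (2.0, 1.0)
```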

The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Carl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least-squares, which is still used today for curve fitting.

This approach follows basically the same procedure but minimizes the sum of $[y_i - (mx_i + c)]^2$ rather than $|y_i - (mx_i + c)|$. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
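One reason least squares is so elegant is that, for a straight line, it has a closed-form solution; a quick sketch (with made-up data) checked against NumPy’s built-in fit:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Closed-form minimizer of sum of [y_i - (m x_i + c)]^2
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
c = y.mean() - m * x.mean()

# Cross-check against numpy's degree-1 polynomial fit
m_np, c_np = np.polyfit(x, y, deg=1)
assert np.allclose([m, c], [m_np, c_np])
print(f"m = {m:.3f}, c = {c:.3f}")
```

No search over candidate lines is needed, in contrast to the absolute-deviation criterion.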

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already written on this blog about Jakob Bernoulli, whose famous book on probability was (probably) written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics. Among them was Daniel Bernoulli who was born in 1700. Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.

But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same part of the sky as seen from Earth, a narrow band called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that formed a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of figuring out what the chance was that the planets were orbiting in the same plane simply by chance, rather than because some physical process confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
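A crude modern version of Bernoulli’s calculation takes only a few lines. The 7.5-degree band half-width, the count of five other planets known in 1735, and the assumption of isotropically oriented orbital planes are my own illustrative choices, not Bernoulli’s actual numbers:

```python
import math

# If an orbital plane is oriented isotropically at random, the chance
# that its inclination to the ecliptic is at most theta is 1 - cos(theta).
theta = math.radians(7.5)  # roughly the half-width of the Zodiac band
p_single = 1 - math.cos(theta)

# Chance that all five other planets known in 1735 land that close by luck
p_all = p_single ** 5
print(f"per planet: {p_single:.4f}, all five: {p_all:.1e}")
```

The result is a probability of order 10⁻¹¹: odds against chance alignment that are, indeed, astronomical.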

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.

It is impossible to overestimate the importance of the role played by Pierre Simon, Marquis de Laplace in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, is probably the first time that a complete framework for the calculation and interpretation of probabilities ever appeared in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution.  I have also mentioned Gauss’s own part in this story, but other famous astronomers played their part. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel (best known to physicists for the functions named after him), who in the same year was also the first to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.
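The Central Limit Theorem is easy to demonstrate numerically; the sketch below sums uniform random numbers and checks that the skewness and excess kurtosis of the result (both zero for a Gaussian) come out close to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
# 100,000 realizations of the sum of 30 independent uniform variables
sums = rng.uniform(size=(100_000, 30)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()

# For an exactly Gaussian distribution these are both zero
skew = np.mean(z**3)
kurt = np.mean(z**4) - 3
print(f"skewness: {skew:+.3f}, excess kurtosis: {kurt:+.3f}")
```

Even though each individual uniform variable is nothing like Gaussian, thirty of them summed together already are, to good accuracy.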

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed to find, when they enter research, that they need to do a lot of statistics. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, even though it is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.

## A Question of Entropy

Posted in Bad Statistics on August 10, 2015 by telescoper

We haven’t had a poll for a while so here’s one for your entertainment.

An article has appeared on the BBC Website entitled Web’s random numbers are too weak, warn researchers. The piece is about the techniques used to encrypt data on the internet. It’s a confusing piece, largely because of the use of the word “random” which is tricky to define; see a number of previous posts on this topic. I’ll steer clear of going over that issue again. However, there is a paragraph in the article that talks about entropy:

An unshuffled pack of cards has a low entropy, said Mr Potter, because there is little surprising or uncertain about the order the cards would be dealt. The more a pack was shuffled, he said, the more entropy it had because it got harder to be sure about which card would be turned over next.
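As a neutral point of reference (without giving away my own vote), the maximum possible Shannon entropy of a deck’s ordering, attained when all 52! orderings are equally likely, is straightforward to compute; a perfectly known ordering has entropy zero:

```python
import math

# Shannon entropy of a uniformly random ordering of a 52-card deck:
# log2 of the number of possible orderings, i.e. log2(52!).
bits = sum(math.log2(k) for k in range(1, 53))
print(f"log2(52!) = {bits:.1f} bits")  # about 225.6 bits
```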

I won’t prejudice your vote by saying what I think about this statement, but here’s a poll so I can try to see what you think.

Of course I also welcome comments via the box below…