Archive for Jon Butterworth

Frequentism: the art of probably answering the wrong question

Posted in Bad Statistics on September 15, 2014 by telescoper

Popped into the office for a spot of lunch in between induction events and discovered that Jon Butterworth has posted an item on his Grauniad blog about how particle physicists use statistics, and the ‘5σ rule’ that is usually employed as a criterion for the detection of, e.g. a new particle. I couldn’t resist bashing out a quick reply, because I believe that actually the fundamental issue is not whether you choose 3σ or 5σ or 27σ but what these statistics mean or don’t mean.

As was the case with a Nature piece I blogged about some time ago, Jon’s article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a particular null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05. This is usually called a ‘2σ’ result because for Gaussian statistics a variable has a probability of about 95% of lying within 2σ of the mean value.
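If you want to see that recipe in action, here is a quick sketch in Python; the toy data, the sample size and the use of numpy and scipy routines are purely illustrative choices of mine, not anything from Jon’s piece. It generates some bivariate data, computes r, and reports the probability of getting a value at least as extreme if the variates were in fact uncorrelated, once via the textbook formula and once by brute-force shuffling.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)        # toy data with a modest correlation built in

# p-value under the null hypothesis of zero correlation (assumes bivariate normality)
r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.3f}, two-sided p-value = {p_value:.4f}")

# The same tail probability estimated directly, by shuffling x to enforce the null
r_null = np.array([np.corrcoef(rng.permutation(x), y)[0, 1] for _ in range(10000)])
print(f"simulated p-value = {np.mean(np.abs(r_null) >= abs(r)):.4f}")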

Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that big under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null hypothesis if it were correct; rejecting a true null hypothesis is what is called making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative hypothesis, calculate the distribution of the test statistic under it, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
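To make the distinction concrete, here is a little Python sketch; the 3σ “true effect” is a number I have made up purely for illustration. The significance level fixes the Type I error rate by construction, but the power (and hence the Type II error rate) only exists once you have committed to a definite alternative.

from scipy import stats

alpha = 0.05                           # chosen significance level = Type I error rate
z_crit = stats.norm.isf(alpha)         # one-sided rejection threshold, about 1.64 sigma

effect_in_sigma = 3.0                  # hypothetical alternative: true signal 3 sigma above the null
power = stats.norm.sf(z_crit - effect_in_sigma)   # P(reject null | alternative true)
type_ii = 1.0 - power                             # P(fail to reject | alternative true)

print(f"threshold = {z_crit:.2f} sigma, power = {power:.2f}, Type II error rate = {type_ii:.2f}")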

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean. Jon’s piece demonstrates that he does, so this is not meant as a personal criticism, but it is a pervasive problem that results quoted in such a way are intrinsically confusing.

The Nature story mentioned above argues that, in fact, results quoted with a p-value of 0.05 turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true; a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but is particularly prevalent in the social sciences, where samples are typically rather small.
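One way to see how this can go wrong is to simulate the test under the data-generating model you actually believe in, rather than the Gaussian one the textbook formula assumes. The Python sketch below uses entirely made-up lognormal data and scipy’s one-sample t-test, and simply counts how often a nominally 5% test rejects a null hypothesis that is in fact true.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, alpha = 10, 20000, 0.05
true_mean = np.exp(0.5)                # the actual mean of a lognormal(0, 1) variate

rejections = 0
for _ in range(trials):
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # skewed, decidedly non-Gaussian
    # t-test of the (true) null hypothesis that the population mean equals true_mean
    _, p = stats.ttest_1samp(sample, popmean=true_mean, alternative='less')
    rejections += (p < alpha)

print(f"nominal Type I error rate: {alpha}, observed: {rejections / trials:.3f}")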

While I agree with the Nature piece that there’s a problem, I don’t agree with the suggestion that it can be solved simply by choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05 or, in the case of particle physics, a 5σ standard (which translates to a p-value of about 3×10⁻⁷). While it is true that this would throw out a lot of flaky ‘two-sigma’ results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question quite different from the one a scientist actually wants to ask, namely what the data have to say about the probability of a specific hypothesis being true, or sometimes whether the data imply one hypothesis more strongly than another. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis.
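For reference, converting σ thresholds into tail probabilities is a one-liner in Python; the figures below use the one-sided Gaussian tail, which is the convention usually quoted in particle physics.

from scipy import stats

for n_sigma in (2, 3, 5):
    print(f"{n_sigma} sigma  ->  one-sided p = {stats.norm.sf(n_sigma):.1e}")
# 2 sigma -> 2.3e-02, 3 sigma -> 1.3e-03, 5 sigma -> 2.9e-07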

I feel so strongly about this that if I had my way I’d ban p-values altogether…

Not that it’s always easy to implement a Bayesian approach. It’s especially difficult when the data are affected by complicated noise statistics and selection effects, and/or when it is difficult to formulate a hypothesis test rigorously because one does not have a clear alternative hypothesis in mind. Experimentalists (including experimental particle physicists) seem to prefer to accept the limitations of the frequentist approach rather than tackle the admittedly very challenging problems of going Bayesian. In fact, in my experience, it seems that those scientists who approach data from a theoretical perspective are almost exclusively Bayesian, while those of an experimental or observational bent stick to their frequentist guns.

Coincidentally, a paper on the arXiv not long ago discussed an interesting apparent paradox in hypothesis testing that arises in the context of high energy physics, which I thought I’d share here. Here is the abstract:

The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to inferences that are radically different from those of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930’s and common today. The setting is the test of a point null (such as the Standard Model of elementary particle physics) versus a composite alternative (such as the Standard Model plus a new force of nature with unknown strength). The p-value, as well as the ratio of the likelihood under the null to the maximized likelihood under the alternative, can both strongly disfavor the null, while the Bayesian posterior probability for the null can be arbitrarily large. The professional statistics literature has many impassioned comments on the paradox, yet there is no consensus either on its relevance to scientific communication or on the correct resolution. I believe that the paradox is quite relevant to frontier research in high energy physics, where the model assumptions can evidently be quite different from those in other sciences. This paper is an attempt to explain the situation to both physicists and statisticians, in hopes that further progress can be made.

This paradox isn’t a paradox at all; the different approaches give different answers because they ask different questions. Both could be right, but I firmly believe that one of them answers the wrong question.
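To see the sort of numbers the abstract is talking about, here is a minimal Python sketch of the textbook Gaussian version of the effect; the choices of n, τ and the even prior odds are mine, purely for illustration, and nothing here is taken from the paper itself. The p-value sits right at the conventional 0.05 threshold, yet the Bayes factor comes out at roughly 15 to 1 in favour of the null.

import numpy as np
from scipy import stats

sigma, tau, n, z = 1.0, 1.0, 10_000, 1.96      # known noise, prior width under H1, sample size, observed significance
se = sigma / np.sqrt(n)
xbar = z * se                                   # sample mean sitting 1.96 standard errors from zero

p_value = 2 * stats.norm.sf(z)                  # two-sided p-value, about 0.05: "reject" the point null

# Marginal likelihoods: N(0, se^2) under the point null, and N(0, tau^2 + se^2)
# under the alternative once the unknown mean is integrated out.
like_null = stats.norm.pdf(xbar, loc=0.0, scale=se)
like_alt = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))
bayes_factor_01 = like_null / like_alt

posterior_null = bayes_factor_01 / (1.0 + bayes_factor_01)   # assuming even prior odds
print(f"p = {p_value:.3f}, B01 = {bayes_factor_01:.1f}, P(null | data) = {posterior_null:.2f}")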

Astronomy (and Particle Physics) Look-alikes, No. 92

Posted in Astronomy Lookalikes on April 24, 2014 by telescoper

Although it’s not strictly an astronomical observation, I am struck by the resemblance between the distinguished particle physicist and blogger Professor Jon Butterworth, of University College London, and the iconic cover boy of Mad Magazine, Alfred E. Neuman. This could explain a lot about the Large Hadron Collider.

[Image: PP_Lookalike]

Spoof Positive?

Posted in Science Politics, Uncategorized on June 29, 2010 by telescoper

Only time for a brief post today, as I’m shortly off to London for some external examining in the East End.

It’s interesting that yesterday’s #SpoofJenks day generated so many contributions that the Guardian decided to get the main instigator, particle physicist Jon Butterworth, to write about it on their Science Blog.  My own contribution of yesterday gets a mention there too.

I have to say I found the whole thing very amusing and wholeheartedly agree with Jon Butterworth (whose original spoof started it all off) when he explained that his primary aim was more to let off steam and less to try to persuade Simon Jenkins of the error of his ways. I felt the same way. Better to poke fun back than allow him to get to you.

I didn’t feel parody was necessary in Simon Jenkins’ case because his arguments are full of factual errors and non sequiturs. In fact, it did occur to me that his piece might be deliberately ironic. Could one of the prime movers behind the Millennium Dome really be serious when he talks about the wastefulness of CERN? Perhaps he’s spoofed us all. But even that wouldn’t excuse his snide personal attack on Martin Rees.

Anyway, as you will have noticed, I just went for straight mockery and had a good half an hour of lunchtime fun writing it. A few people seem to have liked my piece, but at least one blogger found it “unpleasant”. You can’t win ‘em all. For what it’s worth, I still think he deserved it.

In fact, I posted a considerably less vitriolic reaction to Jenkins’ article on Saturday, but trying to respond in rational terms was something I found very frustrating. Only a few hundred people read this blog, so it’s pretty futile to try to take on a columnist from a national newspaper that’s read by hundreds of thousands. I’m not sure he’s listening anyway, as he’s written similar drivel countless times before. Far better, in my view, to join the collective piss-taking. At least it got the Guardian interested.

Maybe after all this Jenkins will actually engage in a dialogue with scientists instead of merely insulting them? Perhaps. But I’m not holding my breath.
