## A Bump at the Large Hadron Collider

Posted in Bad Statistics, The Universe and Stuff with tags , , , on December 16, 2015 by telescoper

Very busy, so just a quickie today. Yesterday the good folk at the Large Hadron Collider announced their latest batch of results. You can find the complete set from the CMS experiment here and from ATLAS here.

The result that everyone is talking about is shown in the following graph, which shows the number of diphoton events as a function of energy:

Attention is focussing on the apparent “bump” at around 750 GeV; you can find an expert summary by a proper particle physicist here and another one here.

It is claimed that the “significance level” of this “detection” is 3.6σ. I won’t comment on that precise statement partly because it depends on the background signal being well understood but mainly because I don’t think this is the right language in which to express such a result in the first place. Experimental particle physicists do seem to be averse to doing proper Bayesian analyses of their data.

However, if you take the claim in the way such things are usually presented, it is roughly equivalent to a statement that the odds against this being a real detection are greater than 6000:1. If any particle physicists out there are willing to wager £6000 for £1 of mine that this result will be confirmed by future measurements then I’d happily take them up on that bet!
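As a quick sanity check of that arithmetic, here is a short calculation assuming the one-sided Gaussian-tail convention that particle physicists usually adopt. Note that reading the resulting number as "the probability there is no signal" is precisely the frequentist/Bayesian confusion at issue:

```python
import math

def sigma_to_p(sigma):
    """One-sided Gaussian tail probability for a given number of sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = sigma_to_p(3.6)          # p-value corresponding to 3.6 sigma
odds_against = (1 - p) / p   # odds implied if p is (mis)read as P(no signal | data)

print(f"p-value: {p:.2e}")                        # ~1.6e-04
print(f"odds against: {odds_against:.0f} to 1")   # ~6300 to 1
```

So the quoted significance does indeed translate to odds of roughly 6000:1 under that (questionable) reading.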

P.S. Entirely predictably there are 10 theory papers on today’s arXiv offering explanations of the alleged bump, none of which says that it’s a noise feature…

## The Nobel Prize for Neutrino Oscillations

Posted in The Universe and Stuff with tags , , , , , , , , on October 6, 2015 by telescoper

Well the Nobel Prize for Physics in 2015 has been announced. It has been awarded jointly to Takaaki Kajita and Arthur B. McDonald for…

the discovery of neutrino oscillations, which prove that neutrinos have mass.

You can read the full citation here. Congratulations to them both. Some physicists around here were caught by surprise because the 2002 Nobel Prize was also awarded for neutrino physics, but the new award is fair: it recognizes the direct measurement of neutrino oscillations, an important breakthrough in its own right, whereas the earlier award was for measurements of solar neutrinos. For a nice description of the background you could do worse than the Grauniad blog post by Jon Butterworth about neutrino physics.

In brief, neutrino oscillation is a process in which neutrinos (which have three distinct flavour states, associated with the electron, mu and tau leptons) change flavour as they propagate. It’s quite a weird thing to spring on students who previously thought that lepton number (which denotes the flavour) was always conserved. I remember years ago having to explain this phenomenon to third-year students taking my particle physics course. I decided to start with an analogy based on more familiar physics, but it didn’t go to plan.

A charged fermion such as an electron (or in fact anything that has a magnetic moment, which would include, e.g., the neutron) has spin and, according to standard quantum mechanics, the component of this in any direction can be described in terms of two basis states, say $|\uparrow>$ and $|\downarrow>$ for spin in the $z$ direction. In general, however, the spin state will be a superposition of these, e.g.

$\frac{1}{\sqrt{2}} \left( |\uparrow> + |\downarrow>\right)$

In this example, as long as the particle is travelling through empty space, the probability of finding it with spin “up” is  50%, as is the probability of finding it in the spin “down” state. Once a measurement is made, the state collapses into a definite “up” or “down” wherein it remains until something else is done to it.

If, on the other hand, the particle  is travelling through a region where there is a  magnetic field the “spin-up” and “spin-down” states can acquire different energies owing to the interaction between the spin and the magnetic field. This is important because it means the bits of the wave function describing the up and down states evolve at different rates, and this  has measurable consequences: measurements made at different positions yield different probabilities of finding the spin pointing in different directions. In effect, the spin vector of the  particle performs  a sort of oscillation, similar to the classical phenomenon called  precession.
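A minimal numerical sketch of this two-state evolution (with ħ = 1 and two energy levels invented purely for illustration): start in an equal superposition, let each component evolve with its own phase, and the probability of finding the spin "up" along the x axis oscillates at the frequency set by the energy splitting.

```python
import cmath, math

E1, E2 = 1.0, 0.5   # illustrative energies of the two basis states (hbar = 1)

def prob_spin_x_up(t):
    """Start in (|up> + |down>)/sqrt(2); evolve each component with its own
    phase; return the probability of measuring spin 'up' along the x axis."""
    up = cmath.exp(-1j * E1 * t) / math.sqrt(2)
    down = cmath.exp(-1j * E2 * t) / math.sqrt(2)
    amp = (up + down) / math.sqrt(2)   # overlap with |+x> = (|up> + |down>)/sqrt(2)
    return abs(amp) ** 2

# This equals cos^2((E1 - E2) t / 2): certainty at t = 0,
# falling to zero at t = pi / (E1 - E2).
print(prob_spin_x_up(0.0))                   # ~1.0
print(prob_spin_x_up(math.pi / (E1 - E2)))   # ~0.0
```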

The mathematical description of neutrino oscillations is very similar to this, except that it’s not the spin part of the wavefunction being affected by an external field that breaks the symmetry between “up” and “down”. Instead the flavour part of the wavefunction is “precessing”, because the flavour states don’t coincide with the eigenstates of the Hamiltonian that describes the neutrinos’ evolution. It does, however, require that different neutrino types have intrinsically different energies, in a way quite similar to the spin-precession example. In the context of neutrinos a difference in energy means a difference in mass, and if there’s a difference in mass then not all flavours of neutrino can be massless.
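For the neutrinos themselves, the standard two-flavour result is that the flavour-change probability goes as sin²(2θ) sin²(1.267 Δm² L/E), with Δm² in eV², L in km and E in GeV. A sketch, plugging in rough atmospheric-sector numbers (the baseline and energy below are merely illustrative, loosely inspired by a NOvA-like setup):

```python
import math

def osc_prob(L_km, E_GeV, delta_m2_eV2, sin2_2theta):
    """Two-flavour oscillation (appearance) probability in the usual units:
    baseline in km, energy in GeV, mass-squared splitting in eV^2."""
    phase = 1.267 * delta_m2_eV2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative atmospheric-sector numbers (roughly the measured values):
# delta_m2 ~ 2.5e-3 eV^2 and near-maximal mixing, sin^2(2 theta) ~ 1.
P = osc_prob(L_km=810.0, E_GeV=2.0, delta_m2_eV2=2.5e-3, sin2_2theta=1.0)
print(f"P(flavour change) ~ {P:.2f}")
```

The key point for the Nobel-winning measurements is simply that P depends on Δm²: a non-zero oscillation probability requires a non-zero mass splitting, so the neutrinos cannot all be massless.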

Although the analogy I used isn’t perfect, I thought it was a good way of getting across the basic idea. Unfortunately, however, when I subsequently asked an examination question about neutrino oscillations I got a significant number of answers that said “neutrino oscillations happen when a neutrino travels through a magnetic field….”. Sigh. Neutrinos don’t interact with magnetic fields, you see…

Anyway, today’s announcement also prompts me to mention that neutrino physics is one of the main research interests in our Experimental Particle Physics group here at Sussex. You can read a recent post here about an important milestone in the development of the NOvA Experiment, which involves several members of the Department of Physics and Astronomy in the School of Mathematical and Physical Sciences here at the University of Sussex. Here’s the University of Sussex’s press release on the subject. In fact Art McDonald is a current collaborator of our neutrino physicists, who have been celebrating his award today!

Neutrino physics is a fascinating subject even to someone like me, who isn’t really a particle physicist. My impression of the field is that it was fairly moribund until about the turn of the millennium, when the first measurement of atmospheric neutrino oscillations was announced. All of a sudden there was evidence that neutrinos can’t all be massless (as many of us had long assumed, at least as far as lecturing was concerned). Now the humble neutrino is the subject of intense experimental activity, not only in the USA and UK but all around the world, in a way that would have been difficult to predict twenty years ago.

But then, as the physicist Niels Bohr famously observed, “Prediction is very difficult. Especially about the future.”

## An Open Letter to the Times Higher World University Rankers

Posted in Education, The Universe and Stuff with tags , , , , , , , , on October 5, 2015 by telescoper

Dear Rankers,

Having perused your latest set of league tables along with the published methodology, a couple of things puzzle me.

First, I note that you have made significant changes to your methodology for combining metrics this year. How, then, can you justify making statements such as

US continues to lose its grip as institutions in Europe up their game

when it appears that any changes could well be explained not by changes in performance, as gauged by the metrics you use, but by changes in the way those metrics are combined?

I assume, as intelligent and responsible people, that you did the obvious test for this effect, i.e. to construct a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators.  Your failure to publish such a set, to illustrate how seriously your readers should take statements such as that quoted above, must then simply have been an oversight. Had you deliberately withheld evidence of the unreliability of your conclusions you would have left yourselves open to an accusation of gross dishonesty, which I am sure would be unfair.

Happily, however, there is a very easy way to allay the fears of the global university community that the world rankings are being manipulated: all you need to do is publish a set of league tables using the 2014 methodology and the 2015 data. Any difference between this table and the one you published would then simply be an artefact and the new ranking can be ignored. I’m sure you are as anxious as anyone else to prove that the changes this year are not simply artificially-induced “churn”, and I look forward to seeing the results of this straightforward calculation published in the Times Higher as soon as possible.

Second, I notice that one of the changes to your methodology is explained thus

This year we have removed the very small number of papers (649) with more than 1,000 authors from the citations indicator.

You are presumably aware that this primarily affects papers relating to experimental particle physics, which is mostly conducted through large international collaborations (chiefly, but not exclusively, based at CERN). This change at a stroke renders such fundamental scientific breakthroughs as the discovery of the Higgs Boson completely worthless. This is a strange thing to do, because this is exactly the type of research that inspires prospective students to study physics, as well as being a direct measure of the global standing of a university.

My current institution, the University of Sussex, is heavily involved in experiments at CERN. For example, Dr Iacopo Vivarelli has just been appointed coordinator of all supersymmetry searches using the ATLAS experiment on the Large Hadron Collider. This involvement demonstrates the international standing of our excellent Experimental Particle Physics group, but if evidence of supersymmetry is found at the LHC your methodology will simply ignore it. A similar fate will also befall any experiment that requires large international collaborations: searches for dark matter, dark energy, and gravitational waves to name but three, all exciting and inspiring scientific adventures that you regard as unworthy of any recognition at all but which draw students in large numbers into participating departments.

Your decision to downgrade collaborative research to zero is not only strange but also extremely dangerous, for it tells university managers that participating in world-leading collaborative research will jeopardise their rankings. How can you justify such a deliberate and premeditated attack on collaborative science? Surely it is exactly the sort of thing you should be rewarding? Physics departments not participating in such research are the ones that should be downgraded!

Your answer might be that excluding “superpapers” only damages the rankings of smaller universities because they might owe a larger fraction of their total citation count to collaborative work. Well, so what if this is true? It’s not a reason for excluding them. Perhaps small universities are better anyway, especially when they emphasize small group teaching and provide opportunities for students to engage in learning that’s led by cutting-edge research. Or perhaps you have decided otherwise and have changed your methodology to confirm your prejudice…

I look forward to seeing your answers to the above questions through the comments box or elsewhere – though you have ignored my several attempts to raise these questions via social media. I also look forward to seeing you correct your error of omission by demonstrating – by the means described above – what  changes in league table positions are by your design rather than any change in performance. If it turns out that the former is the case, as I think it will, at least your own journal provides you with a platform from which you can apologize to the global academic community for wasting their time.

Yours sincerely,

Telescoper

## “Credit” needn’t mean “Authorship”

Posted in Science Politics, The Universe and Stuff with tags , , , on September 4, 2015 by telescoper

I’ve already posted about the absurdity of scientific papers with ridiculously long author lists but this issue has recently come alive again with the revelation that the compilers of the Times Higher World University Rankings decided to exclude such papers entirely from their analysis of citation statistics.

Large collaborations involving not only scientists but engineers, instrument builders, computer programmers and data analysts are the norm in some fields of science – especially (but not exclusively) experimental particle physics – so the arbitrary decision to omit such works from bibliometric analysis is not only idiotic but also potentially damaging to a number of disciplines. The “logic” behind this decision is that papers with “freakish” author lists might distort analyses of citation impact, even allowing – heaven forbid – small institutions with a strong involvement in world-leading studies such as those associated with the Large Hadron Collider to do well compared with larger institutions that are not involved in such collaborations. If what you do doesn’t fit comfortably within a narrow and simplistic method of evaluating research, then it must be excluded even if it is the best in the world. A sensible person would realise that if the method doesn’t give proper credit then you need a better method, but the bean counters at the Times Higher have decided to give no credit at all to research conducted in this way. The consequences of putting the bibliometric cart in front of the scientific horse could be disastrous, as institutions find their involvement in international collaborations dragging them down the league tables.

I despair of the obsession with league tables, because these rankings involve trying to shoehorn a huge amount of complicated information into a single figure of merit. This is not only pointless, but could also drive behaviours that are destructive to entire disciplines.

That said, there is no denying that particle physics, cosmology and other disciplines that operate through large teams must share part of the blame. Those involved in these collaborations have achieved brilliant successes through the imagination and resourcefulness of the people involved. Where imagination has failed, however, is in the continued insistence that the only way to give credit to members of a consortium is by making them all authors of scientific papers. In the example I blogged about a few months ago this blinkered approach generated a paper with more than 5000 authors; of the 33 pages in the article, no fewer than 24 were taken up with the list of authors.

Papers just don’t have five thousand “authors”. Indeed, I suspect that only about 1% of these “authors” have even read the paper. That doesn’t mean that the other 99% didn’t do immensely valuable work. It does mean that pretending that they participated in writing the article that describes their work isn’t the right way to acknowledge their contribution. How are young scientists supposed to carve out a reputation if their name is always buried in immensely long author lists? The very system that attempts to give them credit renders that credit worthless. Instead of looking at publication lists, appointment panels have to rely on reference letters, and that means early career researchers have to depend on the power of patronage.

As science evolves it is extremely important that the methods for disseminating scientific results evolve too. The trouble is that they aren’t doing so. We remain obsessed with archaic modes of publication, partly because of innate conservatism and partly because the lucrative publishing industry benefits from the status quo. The system is clearly broken, but the scientific community carries on regardless. When there are so many brilliant minds engaged in this sort of research, why are so few willing to challenge an orthodoxy that has long outlived its usefulness? Change is needed, not to make life simpler for the compilers of league tables, but for the sake of science itself.

I’m not sure what is to be done, but it’s an urgent problem which looks set to develop very rapidly into an emergency. One idea appears in a paper on the arXiv with the abstract:

Science and engineering research increasingly relies on activities that facilitate research but are not currently rewarded or recognized, such as: data sharing; developing common data resources, software and methodologies; and annotating data and publications. To promote and advance these activities, we must develop mechanisms for assigning credit, facilitate the appropriate attribution of research outcomes, devise incentives for activities that facilitate research, and allocate funds to maximize return on investment. In this article, we focus on addressing the issue of assigning credit for both direct and indirect contributions, specifically by using JSON-LD to implement a prototype transitive credit system.

I strongly recommend this piece. I don’t think it offers a complete solution, but it certainly contains many interesting ideas. For the situation to improve, however, we have to accept that there is a problem. As things stand, far too many senior scientists are in denial. This has to change.
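As a toy illustration of the transitive-credit idea (not the JSON-LD implementation described in the paper; the names, weights and structure below are entirely invented): each research product divides its unit of credit among its contributors, and any contributor that is itself a product, such as a software package, passes its share further down the chain.

```python
# Toy "transitive credit" sketch. All names and weights are invented for
# illustration; the actual proposal uses JSON-LD records for each product.
credit_map = {
    "paper": {"alice": 0.5, "analysis-code": 0.3, "bob": 0.2},
    "analysis-code": {"carol": 0.7, "numlib": 0.3},
    "numlib": {"dave": 1.0},
}

def transitive_credit(product, weight=1.0, totals=None):
    """Propagate `weight` units of credit from `product` down through
    any contributors that are themselves products."""
    if totals is None:
        totals = {}
    for contributor, share in credit_map[product].items():
        if contributor in credit_map:                      # a product: recurse
            transitive_credit(contributor, weight * share, totals)
        else:                                              # a person: accumulate
            totals[contributor] = totals.get(contributor, 0.0) + weight * share
    return totals

print(transitive_credit("paper"))
# alice gets 0.5, bob 0.2, carol 0.21, dave 0.09 -- the shares sum to 1.0
```

The appeal is that people like "dave", who only wrote a library, still receive quantifiable credit for the paper without appearing on a five-thousand-name author list.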

## The Curious Case of the 3.5 keV “Line” in Cluster Spectra

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , , on July 22, 2015 by telescoper

Earlier this week I went to a seminar. That’s a rare enough event these days given all the other things I have to do. The talk concerned was by Katie Mack, who was visiting the Astronomy Centre and it contained a nice review of the general situation regarding the constraints on astrophysical dark matter from direct and indirect detection experiments. I’m not an expert on experiments – I’m banned from most laboratories on safety grounds – so it was nice to get a review from someone who knows what they’re talking about.

One of the pieces of evidence discussed in the talk was something I’ve never really looked at in detail myself, namely the claimed evidence of an  emission “line” in the spectrum of X-rays emitted by the hot gas in galaxy clusters. I put the word “line” in inverted commas for reasons which will soon become obvious. The primary reference for the claim is a paper by Bulbul et al which is, of course, freely available on the arXiv.

The key graph from that paper is this:

The claimed feature – it stretches the imagination considerably to call it a “line” – is shown in red. No, I’m not particularly impressed either, but this is what passes for high-quality data in X-ray astronomy!

There’s a nice review of this from about a year ago here which says this feature

is very significant, at 4-5 astrophysical sigma.

I’m not sure how to convert astrophysical sigma into actual sigma, but then I don’t really like sigma anyway. A proper Bayesian model comparison is really needed here. If it is a real feature then a plausible explanation is that it is produced by the decay of some sort of dark matter particle in a manner that involves the radiation of an energetic photon. An example is the decay of a massive sterile neutrino – a hypothetical particle that does not participate in weak interactions – into a lighter standard model neutrino and a photon, as discussed here. In this scenario the parent particle would have a mass of about 7 keV, so that the resulting photon has an energy of half that. Such a particle would constitute warm dark matter.
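To illustrate what a Bayesian model comparison might look like in this setting, here is a deliberately crude toy (invented, noise-free data and uniform priors on made-up grids; nothing here reflects the real XMM spectra): compute the evidence for "continuum only" versus "continuum plus a Gaussian line at 3.5 keV" by averaging the likelihood over each model's prior, and compare.

```python
import math

# Toy Bayesian model comparison for a "line" in a spectrum -- an illustration
# of the kind of calculation being advocated, not a reanalysis of real data.
energies = [3.0 + 0.05 * i for i in range(21)]          # keV
SIGMA_LINE, NOISE = 0.05, 1.0

def line(E, amp, centre=3.5):
    return amp * math.exp(-0.5 * ((E - centre) / SIGMA_LINE) ** 2)

# Fake, noise-free "data": a flat continuum of 10 counts plus a weak line.
data = [10.0 + line(E, amp=3.0) for E in energies]

def log_like(model):
    return sum(-0.5 * ((d - m) / NOISE) ** 2 - 0.5 * math.log(2 * math.pi)
               for d, m in zip(data, model))

def log_mean(logs):
    """log of the mean of exp(logs), computed stably."""
    top = max(logs)
    return top + math.log(sum(math.exp(x - top) for x in logs) / len(logs))

c_grid = [5.0 + 0.1 * i for i in range(101)]   # uniform prior on the continuum
a_grid = [0.1 * i for i in range(101)]         # uniform prior on line amplitude

# Evidence = prior-weighted average of the likelihood over each model's grid.
logZ0 = log_mean([log_like([c] * len(energies)) for c in c_grid])
logZ1 = log_mean([log_like([c + line(E, a) for E in energies])
                  for c in c_grid for a in a_grid])

print("ln Bayes factor (line vs no line):", logZ1 - logZ0)
```

Because marginalization automatically penalizes the extra amplitude parameter, the Bayes factor answers "is the line worth its complexity?" directly, rather than detouring through sigmas, astrophysical or otherwise.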

On the other hand, that all depends on you being convinced that there is anything there at all other than a combination of noise and systematics. I urge you to read the paper and decide. Then perhaps you can try to persuade me, because I’m not at all sure. The X-ray spectrum of hot gas has a number of known emission features in it that need to be subtracted before any anomalous emission can be isolated. I will remark, however, that there is a known recombination line of Argon that lies at 3.6 keV, and you have to be convinced that this has been subtracted correctly if the red bump is to be interpreted as something extra. Also note that all the spectra that show this feature were obtained using the same instrument, on the XMM-Newton spacecraft, which makes it harder to eliminate the possibility that it is an instrumental artefact.

I’d be interested in comments from X-ray folk about how confident we should be that the 3.5 keV “anomaly” is real…

## The Latest TV – Experimental Particle Physics at Sussex

Posted in Brighton, The Universe and Stuff with tags , , , , on June 10, 2015 by telescoper

I just came across this clip featuring our own Prof. Antonella de Santo of the Department of Physics & Astronomy at the University of Sussex (where she leads the Experimental Particle Physics group) talking about the group’s work on The Latest TV, a new documentary TV station based in Brighton.

## Big Science is not the Problem – it’s Top-Down Management of Research

Posted in Finance, Science Politics, The Universe and Stuff with tags , , , , on June 2, 2015 by telescoper

I’m very late to this because I was away at the weekend, but I couldn’t resist making a comment on a piece that appeared in the Grauniad last week entitled How can we stop big science hoovering up all the research funding? That piece argues for a new system of allocating research funding to avoid all the available cash being swallowed by a few big projects. This is an argument that’s been rehearsed many times before in the context of physics and astronomy, the costs of the UK contribution to facilities such as CERN (home of the Large Hadron Collider) and the European Southern Observatory being major parts of the budget of the Science and Technology Facilities Council that often threaten to squeeze the funds available for “exploiting” these facilities – in other words for doing science. What’s different about the Guardian article however is that it focusses on genomics, which has only recently threatened to become a Big Science.

Anyway, Jon Butterworth has responded with a nice piece of his own (also in the Guardian) with which I agree quite strongly. I would however like to make a couple of comments.

First of all, I think there are two different usages of the phrase “Big Science” and we should be careful not to conflate them. The first, which particularly applies in astronomy and particle physics, is that the only way to do research in these subjects is with enormous and generally very expensive pieces of kit. For this reason, and in order to share the cost in a reasonable manner, these fields tend to be dominated by large international collaborations. While it is indeed true that the Large Hadron Collider has cost a lot of money, that money has been spent by a large number of countries over a very long time. Moreover, particle physicists argued for that way of working and collectively made it a reality. The same thing happens in astronomy: the next generation of large telescopes are all transnational affairs.

The other side of the “Big Science” coin is quite a different thing. It relates to attempts to impose a top-down organization on science when that has nothing to do with the needs of the scientific research. In other words, corralling scientists into big research centres when the work doesn’t need to be done like that. Here I am much more sceptical of the value. All the evidence from, e.g., the Research Excellence Framework is that there is a huge amount of top-class research going on in small groups here and there, much of it extremely innovative and imaginative. It’s very hard to justify concentrating everything in huge centres that are only Big because they’ve killed everything that’s Small, by concentrating resources to satisfy some management fixation rather than based on the quality of the research being done. I have seen far too many attempts by funding councils, especially the Engineering and Physical Sciences Research Council, to direct funding from the top down which, in most cases, is simply not the best way to deliver compelling science. Directed programmes rarely deliver exciting science, partly because the people directing them are not the people who actually know most about the field.

I am a fan of the first kind of Big Science, and not only for scientific reasons. I like the way it encourages us to think beyond the petty limitations of national politics, which is something that humanity desperately needs to get used to. But while Big Science can be good, forcing other science to work in Big institutes won’t necessarily make it better. In fact it could have the opposite effect, stifling the innovative approaches so often found in small groups. Small can be beautiful too.

Finally, I have to say that I found the Guardian article that started all this off to be a bit mean-spirited. Scientists should be standing together not just to defend but to advance scientific research across all the disciplines, rather than trying to set different kinds of researchers against each other. I feel the same way about funding the arts, actually. I’m all for more science funding, but I don’t want to see the arts killed off to pay for it.

## STFC Consolidated Grants Review

Posted in Finance, Science Politics with tags , , , , , , , , on October 28, 2014 by telescoper

It’s been quite a while since I last put my community service hat on while writing a blog post, but here’s an opportunity. Last week the Science and Technology Facilities Council (STFC) published a Review of the Implementation of Consolidated Grants, which can be found in its entirety here (PDF). I encourage all concerned to read it.

Once upon a time I served on the Astronomy Grants Panel whose job it was to make recommendations on funding for Astronomy through the Consolidated Grant Scheme, though this review covers the implementation across the entire STFC remit, including Nuclear Physics, Particle Physics (Theory), Particle Physics (Experiment) and Astronomy (which includes solar-terrestrial physics and space science). It’s quite interesting to see differences in how the scheme has been implemented across these various disciplines, but I’ll just include here a couple of comments on the Astronomy side of things.

First, here is a table showing the number of academic staff for whom support was requested over the three years for which the consolidated grant system has been in existence (2011, 2012 and 2013 for rounds 1, 2 and 3 respectively).  You can see that the overall success rate was slightly better in round 3, possibly due to applicants learning more about the process over the cycle, but otherwise the outcomes seem reasonably consistent:

The last three rows of this table  on the other hand show quite clearly the impact of the “flat cash” settlement for STFC science funding on Postdoctoral Research Assistant (PDRA) support:

Constant cash means ongoing cuts in real terms; there were 11.6% fewer Astronomy PDRAs supported in 2013 than in 2011. Job prospects for the next generation of astronomers continue to dwindle…

Any other comments, either on these tables or on the report as a whole, are welcome through the comments box.

## Neutrini via NOVA

Posted in The Universe and Stuff with tags , , , , on October 9, 2014 by telescoper

There’s been quite a lot of discussion at this meeting so far about neutrino physics (and indeed neutrino astrophysics) which, I suppose, is not surprising given the proximity of my current location, the city of L’Aquila, to the Gran Sasso Laboratory which is situated inside a mountain a few kilometres away. If I were being tactless I could at this point mention the infamous “faster-than-light neutrino” episode that emanated from here a while ago, but obviously I won’t do that.

Anyway, I thought I’d take the opportunity to put up this video which describes how neutrinos are detected at the NOvA experiment, on which some of my colleagues in the Department of Physics & Astronomy at the University of Sussex work and which is now up and running. If you want to know how to detect particles so elusive that they can pass right through the Earth without being absorbed, then watch this:

## Frequentism: the art of probably answering the wrong question

Posted in Bad Statistics with tags , , , , , , on September 15, 2014 by telescoper

Popped into the office for a spot of lunch in between induction events and discovered that Jon Butterworth has posted an item on his Grauniad blog about how particle physicists use statistics, and the ‘5σ rule’ that is usually employed as a criterion for the detection of, e.g. a new particle. I couldn’t resist bashing out a quick reply, because I believe that actually the fundamental issue is not whether you choose 3σ or 5σ or 27σ but what these statistics mean or don’t mean.

As was the case with a Nature piece I blogged about some time ago, Jon’s article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a particular null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05. This is usually called a ‘2σ’ result because for Gaussian statistics a variable has a probability of about 95% of lying within 2σ of the mean value.

Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that large under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative hypothesis, calculate the distribution of the test statistic under it, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
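The distinction can be made concrete with a toy one-sided test of H0: mean 0 against H1: mean 1 (unit variance, a single observation; all numbers invented for illustration). Note that a test can be "significant at 5%" while still having rather feeble power:

```python
import math

def gauss_tail(z):
    """P(Z > z) for a standard normal variate."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# One-sided test of H0: mean = 0 against H1: mean = 1 (unit variance,
# a single observation) -- numbers chosen purely for illustration.
z_crit = 1.645                       # reject H0 when the observation exceeds this

alpha = gauss_tail(z_crit)           # Type I error rate: ~0.05
beta = 1 - gauss_tail(z_crit - 1.0)  # Type II error rate under H1
power = 1 - beta                     # chance of rejecting H0 when H1 is true

print(f"Type I: {alpha:.3f}, Type II: {beta:.3f}, power: {power:.3f}")
```

With these numbers the power is only about 0.26, so three times out of four this "5% significance" test fails to reject a null hypothesis that is in fact wrong.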

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean. Jon’s piece demonstrates that he does, so this is not meant as a personal criticism, but it is a pervasive problem that results quoted in such a way are intrinsically confusing.

The Nature story mentioned above argues that results quoted with a p-value of 0.05 in fact turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true; a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but is particularly prevalent in the social sciences, where samples are typically rather small.

While I agree with the Nature piece that there’s a problem, I don’t agree with the suggestion that it can be solved simply by choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05 or, in the case of particle physics, a 5σ standard (which translates to a p-value of about 0.0000003). While it is true that this would throw out a lot of flaky ‘two-sigma’ results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question which is quite different from that which a scientist would actually want to ask, which is what the data have to say about the probability of a specific hypothesis being true, or sometimes whether the data imply one hypothesis more strongly than another. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis.
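A simple base-rate calculation shows why "p < 0.05" cannot mean "wrong only 5% of the time": the fraction of false detections depends on the prior fraction of hypotheses that are actually true, which the p-value ignores entirely. The 10% prior and 0.8 power below are invented for illustration:

```python
# How often is a "p < 0.05" detection wrong? A base-rate illustration
# (the 10% prior and 0.8 power are invented numbers for the example).
n_hypotheses = 1000
prior_true = 0.10      # fraction of tested hypotheses that are real effects
alpha, power = 0.05, 0.8

true_effects = n_hypotheses * prior_true
true_positives = true_effects * power                    # real effects passing p < 0.05
false_positives = (n_hypotheses - true_effects) * alpha  # nulls that pass anyway

wrong_fraction = false_positives / (true_positives + false_positives)
print(f"fraction of 'detections' that are false: {wrong_fraction:.2f}")
```

With these (invented) numbers over a third of "significant" results are false, despite every one of them clearing the 5% threshold; this is exactly the sort of calculation that requires the prior probabilities a frequentist analysis refuses to contemplate.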