Archive for Particle Physics

STFC Consolidated Grants Review

Posted in Finance, Science Politics on October 28, 2014 by telescoper

It’s been quite a while since I last put my community service hat on while writing a blog post, but here’s an opportunity. Last week the Science and Technology Facilities Council (STFC) published a Review of the Implementation of Consolidated Grants, which can be found in its entirety here (PDF). I encourage all concerned to read it.

Once upon a time I served on the Astronomy Grants Panel whose job it was to make recommendations on funding for Astronomy through the Consolidated Grant Scheme, though this review covers the implementation across the entire STFC remit, including Nuclear Physics, Particle Physics (Theory), Particle Physics (Experiment) and Astronomy (which includes solar-terrestrial physics and space science). It’s quite interesting to see differences in how the scheme has been implemented across these various disciplines, but I’ll just include here a couple of comments on the Astronomy side of things.

First, here is a table showing the number of academic staff for whom support was requested over the three years for which the consolidated grant system has been in existence (2011, 2012 and 2013 for rounds 1, 2 and 3 respectively).  You can see that the overall success rate was slightly better in round 3, possibly due to applicants learning more about the process over the cycle, but otherwise the outcomes seem reasonably consistent:

[Table: STFC_Con1]

The last three rows of this table, on the other hand, show quite clearly the impact of the “flat cash” settlement for STFC science funding on Postdoctoral Research Assistant (PDRA) support:

[Table: STFC_Con]

Constant cash means ongoing cuts in real terms; there were 11.6% fewer Astronomy PDRAs supported in 2013 than in 2011. Job prospects for the next generation of astronomers continue to dwindle…

Any other comments, either on these tables or on the report as a whole, are welcome through the comments box.

 

Neutrini via NOVA

Posted in The Universe and Stuff on October 9, 2014 by telescoper

There’s been quite a lot of discussion at this meeting so far about neutrino physics (and indeed neutrino astrophysics) which, I suppose, is not surprising given the proximity of my current location, the city of L’Aquila, to the Gran Sasso Laboratory, which is situated inside a mountain a few kilometres away. If I were being tactless I could at this point mention the infamous “faster-than-light neutrino” episode that emanated from here a while ago, but obviously I won’t do that.

Anyway, I thought I’d take the opportunity to put up this video, which describes how neutrinos are detected at the NOvA experiment, on which some of my colleagues in the Department of Physics & Astronomy at the University of Sussex work and which is now up and running. If you want to know how to detect particles so elusive that they can pass right through the Earth without being absorbed, then watch this:

Frequentism: the art of probably answering the wrong question

Posted in Bad Statistics on September 15, 2014 by telescoper

Popped into the office for a spot of lunch in between induction events and discovered that Jon Butterworth has posted an item on his Grauniad blog about how particle physicists use statistics, and the ‘5σ rule’ that is usually employed as a criterion for the detection of, e.g. a new particle. I couldn’t resist bashing out a quick reply, because I believe that actually the fundamental issue is not whether you choose 3σ or 5σ or 27σ but what these statistics mean or don’t mean.

As was the case with a Nature piece I blogged about some time ago, Jon’s article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a particular null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05. This is usually called a ‘2σ’ result because for Gaussian statistics a variable has a probability of 95% of lying within 2σ of the mean value.
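Here’s a quick numerical illustration of that correlation example (a sketch in Python using scipy, with made-up data rather than anything from Jon’s post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Bivariate data drawn under the null hypothesis: x and y are uncorrelated.
n = 50
x = rng.normal(size=n)
y = rng.normal(size=n)

# Sample correlation coefficient r and its two-sided p-value: the
# probability of getting |r| at least this large if the variates were
# truly uncorrelated.
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p-value = {p:.3f}")

# Under the null hypothesis the p-value is uniformly distributed, so
# about 5% of repeated experiments give p < 0.05 purely by chance.
trials = 2000
false_alarms = sum(
    stats.pearsonr(rng.normal(size=n), rng.normal(size=n))[1] < 0.05
    for _ in range(trials)
)
print(f"fraction with p < 0.05: {false_alarms / trials:.3f}")
```

The second half of the sketch makes the point that a ‘significant’ correlation will crop up in roughly one in twenty data sets even when there is no correlation at all.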

Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that large under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null-hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative distribution, calculate the distribution based on it, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
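To make the distinction concrete, here is a toy calculation of the power of a simple one-sided Gaussian test; the numbers, including the shift of the mean under the alternative, are invented purely for illustration:

```python
from scipy.stats import norm

alpha = 0.05                     # Type I error rate we are willing to tolerate
z_crit = norm.ppf(1 - alpha)     # one-sided rejection threshold, about 1.645

# The p-value involves only the null; to compute power you must commit
# to an alternative. Suppose (an invented, illustrative choice) that
# under the alternative the test statistic has mean 2.5 instead of 0.
mu_alt = 2.5
power = 1 - norm.cdf(z_crit - mu_alt)   # P(reject null | alternative true)
type_ii = 1 - power                     # P(fail to reject | alternative true)
print(f"power = {power:.3f}, Type II error rate = {type_ii:.3f}")
```

Note that the Type I rate is fixed by the analyst, but the Type II rate only exists once an alternative hypothesis has been specified.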

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean. Jon’s piece demonstrates that he does, so this is not meant as a personal criticism, but it is a pervasive problem that results quoted in such a way are intrinsically confusing.

The Nature story mentioned above argues that in fact results quoted with a p-value of 0.05 turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true; a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but is particularly prevalent in the social sciences, where samples are typically rather small.
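The Nature figure is easy to rationalize with a little bookkeeping. Suppose (purely for illustration; these numbers are mine, not from the Nature piece) that only a minority of the hypotheses being tested correspond to real effects:

```python
# Toy bookkeeping for how 'p < 0.05' results can be wrong much more than
# 5% of the time. The prior fraction of real effects and the power of
# the test are invented for illustration.
prior_real = 0.10   # fraction of tested hypotheses that are real effects
power = 0.80        # chance a real effect is detected
alpha = 0.05        # chance a null effect is 'detected' anyway

true_pos = prior_real * power
false_pos = (1 - prior_real) * alpha
false_discovery_rate = false_pos / (true_pos + false_pos)
print(f"fraction of significant results that are false: {false_discovery_rate:.0%}")
```

Under these assumptions more than a third of the ‘significant’ results are spurious, even though every individual test was conducted at the 5% level.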

While I agree with the Nature piece that there’s a problem, I don’t agree with the suggestion that it can be solved simply by choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05 or, in the case of particle physics, a 5σ standard (which translates to a p-value of about 0.000001). While it is true that this would throw out a lot of flaky ‘two-sigma’ results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question which is quite different from that which a scientist would actually want to ask, which is what the data have to say about the probability of a specific hypothesis being true, or sometimes whether the data imply one hypothesis more strongly than another. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis.
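For reference, the translation between ‘n-sigma’ and p-values is a one-liner (using scipy’s Gaussian survival function; whether one quotes one tail or two is a convention that varies between fields):

```python
from scipy.stats import norm

# Two-sided Gaussian tail probability P(|X - mean| > n*sigma); particle
# physicists often quote the one-sided version, which is half of this.
for n in (2, 3, 5):
    print(f"{n} sigma -> p = {2 * norm.sf(n):.2e}")
```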

I feel so strongly about this that if I had my way I’d ban p-values altogether…

Not that it’s always easy to implement a Bayesian approach. It’s especially difficult when the data are affected by complicated noise statistics and selection effects, and/or when it is difficult to formulate a hypothesis test rigorously because one does not have a clear alternative hypothesis in mind. Experimentalists (including experimental particle physicists) seem to prefer to accept the limitations of the frequentist approach rather than tackle the admittedly very challenging problems of going Bayesian. In fact, in my experience, it seems that those scientists who approach data from a theoretical perspective are almost exclusively Bayesian, while those of an experimental or observational bent stick to their frequentist guns.

Coincidentally a paper on the arXiv not long ago discussed an interesting apparent paradox in hypothesis testing that arises in the context of high energy physics, which I thought I’d share here. Here is the abstract:

The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to inferences that are radically different from those of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930’s and common today. The setting is the test of a point null (such as the Standard Model of elementary particle physics) versus a composite alternative (such as the Standard Model plus a new force of nature with unknown strength). The p-value, as well as the ratio of the likelihood under the null to the maximized likelihood under the alternative, can both strongly disfavor the null, while the Bayesian posterior probability for the null can be arbitrarily large. The professional statistics literature has many impassioned comments on the paradox, yet there is no consensus either on its relevance to scientific communication or on the correct resolution. I believe that the paradox is quite relevant to frontier research in high energy physics, where the model assumptions can evidently be quite different from those in other sciences. This paper is an attempt to explain the situation to both physicists and statisticians, in hopes that further progress can be made.

This paradox isn’t a paradox at all; the different approaches give different answers because they ask different questions. Both could be right, but I firmly believe that one of them answers the wrong question.
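The behaviour the abstract describes can be reproduced in a few lines. In this sketch (my own toy numbers, not the paper’s), a fixed ‘3σ’ result rejects the null at the same p-value throughout, while the Bayes factor swings towards the null as the measurement becomes more precise:

```python
import math

def bayes_factor_for_null(z, s, tau):
    """Bayes factor in favour of the point null H0: theta = 0 against a
    composite alternative H1: theta ~ N(0, tau^2), given a measurement
    with sample mean z*s and Gaussian standard error s."""
    xbar = z * s
    # Marginal density of the sample mean under H0: N(0, s^2).
    m0 = math.exp(-xbar**2 / (2 * s**2)) / math.sqrt(2 * math.pi * s**2)
    # Under H1, integrating theta out gives xbar ~ N(0, s^2 + tau^2).
    v1 = s**2 + tau**2
    m1 = math.exp(-xbar**2 / (2 * v1)) / math.sqrt(2 * math.pi * v1)
    return m0 / m1

# Hold a '3 sigma' result fixed (same p-value throughout) with a broad
# prior tau = 1 on the new-physics parameter, and shrink the standard
# error, i.e. take ever more data:
for s in (0.1, 0.01, 0.001):
    print(f"s = {s}: BF(H0/H1) = {bayes_factor_for_null(3.0, s, 1.0):.2f}")
```

The frequentist verdict is unchanged along that sequence, but the Bayes factor moves from favouring the alternative to strongly favouring the null; the two methods really are answering different questions.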

Knit your own Neutralino

Posted in The Universe and Stuff on June 21, 2014 by telescoper

I thought I’d give you a sneak preview of something soon to feature at the forthcoming Royal Society Summer Science Exhibition. With input from particle physicists from the Department of Physics & Astronomy at the University of Sussex, the inestimable Dorothy Lamb has designed a “Knit your own Neutralino” pack, which contains a knitting pattern and embellishments (wool not included), that can be used to construct a plushie representing the lightest neutralino, χ01, a candidate for the dark matter that pervades the Universe.

[Images: IMG_2540, IMG_2541]

Here are some examples, as produced by Dorothy herself:

[Image: IMG_2536]

Here are some more elaborate variations, representing (I think) different types of chargino.

[Image: IMG_2539]

Whatever they are, they’re a lot of fun and in my opinion more than a little bit camp!

I think we should introduce knitting as part of the “transferable skills” element of our physics courses. If we did, Dorothy would definitely graduate with first class honours!

Ode to SnarXiv

Posted in The Universe and Stuff on April 30, 2014 by telescoper

So many things pass me by these days that I’m not usually surprised when I have no idea what people around me are talking about. I am, however, quite surprised that, until yesterday, I had never heard of the snarXiv. As its author explains:

The snarXiv is a random high-energy theory paper generator incorporating all the latest trends, entropic reasoning, and exciting moduli spaces. The arXiv is similar, but occasionally less random.

The snarXiv uses “Context Free Grammar” together with a database of stock words and phrases to generate its content, which is actually just limited to titles and abstracts rather than entire papers. It’s just a matter of time, though. The results are variable, with some making no sense at all even by the standards of theoretical particle physics, but the best are almost good enough to pass off as real abstracts.

Here’s an example in the form of the abstract of a paper called (P,q) Brane Probe Predicted From Conformal Blocks:

Recently, work on new inflation has opened up a perturbative class of braneworld matrix models. We make contact with observables, moreover investigating trivial Beckenstein-Boltzmann equations. Next, using the behavior of a left-right reduction of models of WIMPs, we reformulate instanton liquids at the LHC. After discussing positrons, we check that worldsheet symmetric central charges are equivalent to electric-duality in gravity. Finally, we make contact with a special lagrangian brane, surprisingly obtaining models of inertial fluctuations.

Why not have a go at arXiv versus SnarXiv to see if you can spot the genuine article titles?

I’m tempted, with a nod to the Sokal Affair, to suggest that a similar approach could be used in the social sciences, but the thing that really struck me is that someone should do a snarXiv for astronomy and astrophysics. Or is someone going to tell me it already exists?

Come to think of it, judging by some of the proposals I’ve read while serving on the Astronomy Grants Panel over the years, a similar generator may already exist for writing grant applications…

NOvA and Neutrinos

Posted in The Universe and Stuff on March 11, 2014 by telescoper

Yesterday’s Grauniad blog post by Jon Butterworth about neutrino physics reminded me that I forgot to post about an important milestone in the development of the NOvA Experiment which involves several members of the Department of Physics and Astronomy in the School of Mathematical and Physical Sciences here at the University of Sussex. Here’s the University of Sussex’s press release on the subject, which came out a couple of weeks ago.

The NOvA experiment consists of two enormous particle detectors, one at the Fermi National Accelerator Laboratory (“Fermilab”) near Chicago and the other in Minnesota. The neutrinos are actually generated at Fermilab; the particle beam is then aimed at the two detectors, one near the source at Fermilab and the other in Ash River, Minnesota, near the Canadian border. The particles, sent in their billions every couple of seconds, complete the 500-mile trip in less than three milliseconds.
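As a sanity check on those numbers: neutrinos travel at essentially the speed of light, so the quoted journey time follows directly (a back-of-envelope sketch using the approximate 500-mile figure from the press release):

```python
# Back-of-envelope light-travel time for the quoted 500-mile baseline
# (the beam actually chords through the Earth, but the difference is tiny).
miles = 500
km = miles * 1.609344
c_km_per_s = 299_792.458
t_ms = km / c_km_per_s * 1000
print(f"light-speed travel time: {t_ms:.2f} ms")
```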

The point is that the experiment has managed for the first time to detect neutrinos that have travelled through the 500 miles of rock separating the two ends of the experiment. This is obviously just a first step, but it’s equally obviously a crucial one.

Colleagues from Sussex University are strongly involved in calibrating and fine-tuning the detector, which produces light when particles pass through it. Dr Abbey Waldron and PhD student Luke Vinton have developed a calibration procedure that uses known properties of muons to calibrate precise measurements of the neutrinos, which are less well understood. The detector sees 200,000 particle interactions a second, produced by cosmic rays bombarding the atmosphere, and scientists can’t record every single one. Sussex’s Dr Matthew Tamsett has developed a trigger algorithm that searches for events that look like neutrinos among the billions of other particle interactions.

Neutrino physics is an interesting subject to someone like me, who isn’t really a particle physicist. My impression of the field is that it was fairly moribund until 1998, when the first measurement of atmospheric neutrino oscillations was announced. All of a sudden there was evidence that neutrinos can’t all be massless (as many of us had long assumed, at least as far as lecturing was concerned). Now the humble neutrino is the subject of intense experimental activity, not only in the USA and UK but all around the world, in a way that would have been difficult to predict twenty years ago.

But then, as the physicist Niels Bohr famously observed, “Prediction is very difficult. Especially about the future.”

Is Inflation Testable?

Posted in The Universe and Stuff on March 4, 2014 by telescoper

It seems the little poll about cosmic inflation I posted last week with humorous intent has ruffled a few feathers, but at least it gives me the excuse to wheel out an updated and edited version of an old piece I wrote on the subject.

Just over thirty  years ago a young physicist came up with what seemed at first to be an absurd idea: that, for a brief moment in the very distant past, just after the Big Bang, something weird happened to gravity that made it push rather than pull.  During this time the Universe went through an ultra-short episode of ultra-fast expansion. The physicist in question, Alan Guth, couldn’t prove that this “inflation” had happened nor could he suggest a compelling physical reason why it should, but the idea seemed nevertheless to solve several major problems in cosmology.

Three decades later, Guth is a professor at MIT and inflation is now well established as an essential component of the standard model of cosmology. But should it be? After all, we still don’t know what caused it and there is little direct evidence that it actually took place. Data from probes of the cosmic microwave background seem to be consistent with the idea that inflation happened, but how confident can we be that it is really a part of the Universe’s history?

According to the Big Bang theory, the Universe was born in a dense fireball which has been expanding and cooling for about 14 billion years. The basic elements of this theory have been in place for over eighty years, but it is only in the last decade or so that a detailed model has been constructed which fits most of the available observations with reasonable precision. The problem is that the Big Bang model is seriously incomplete. The fact that we do not understand the nature of the dark matter and dark energy that appear to fill the Universe is a serious shortcoming. Even worse, we have no way at all of describing the very beginning of the Universe, which appears in the equations used by cosmologists as a “singularity” – a point of infinite density that defies any sensible theoretical calculation. We have no way to define a priori the initial conditions that determine the subsequent evolution of the Big Bang, so we have to try to infer from observations, rather than deduce by theory, the parameters that govern it.

The establishment of the new standard model (known in the trade as the “concordance” cosmology) is now allowing astrophysicists to turn back the clock in order to understand the very early stages of the Universe’s history, and hopefully to answer the ultimate question: how did the Universe begin?

Paradoxically, it is observations on the largest scales accessible to technology that provide the best clues about the earliest stages of cosmic evolution. In effect, the Universe acts like a microscope: primordial structures smaller than atoms are blown up to astronomical scales by the expansion of the Universe. This also allows particle physicists to use cosmological observations to probe structures too small to be resolved in laboratory experiments.

Our ability to reconstruct the history of our Universe, or at least to attempt this feat, depends on the fact that light travels with a finite speed. The further away we see a light source, the further back in time its light was emitted. We can now observe light from stars in distant galaxies emitted when the Universe was less than one-sixth of its current size. In fact we can see even further back than this using microwave radiation rather than optical light. Our Universe is bathed in a faint glow of microwaves produced when it was about one-thousandth of its current size and had a temperature of thousands of degrees, rather than the chilly three degrees above absolute zero that characterizes the present-day Universe. The existence of this cosmic background radiation is one of the key pieces of evidence in favour of the Big Bang model; it was first detected in 1964 by Arno Penzias and Robert Wilson who subsequently won the Nobel Prize for their discovery.

The process by which the standard cosmological model was assembled has been a gradual one, but the latest step was taken by the European Space Agency’s Planck mission. I’ve blogged about the implications of the Planck results for cosmic inflation in more technical detail here. In a nutshell, for several years this satellite mapped the properties of the cosmic microwave background and how it varies across the sky. Small variations in the temperature of the sky result from sound waves excited in the hot plasma of the primordial fireball. These have characteristic properties that allow us to probe the early Universe in much the same way that solar astronomers use observations of the surface of the Sun to understand its inner structure, a technique known as helioseismology. The detection of the primaeval sound waves is one of the triumphs of modern cosmology, not least because their amplitude tells us precisely how loud the Big Bang really was.

The pattern of fluctuations in the cosmic radiation also allows us to probe one of the exciting predictions of Einstein’s general theory of relativity: that space should be curved by the presence of matter or energy. Measurements from Planck and its predecessor WMAP reveal that our Universe is very special: it has very little curvature, and so has a very finely balanced energy budget: the positive energy of the expansion almost exactly cancels the negative energy of gravitational attraction. The Universe is (very nearly) flat.

The observed geometry of the Universe provides a strong piece of evidence that there is a mysterious and overwhelming preponderance of dark stuff in our Universe. We can’t see this dark matter and dark energy directly, but we know it must be there because we know the overall budget is balanced. If only economics were as simple as physics.

Computer Simulation of the Cosmic Web

The concordance cosmology has been constructed not only from observations of the cosmic microwave background, but also using hints supplied by observations of distant supernovae and by the so-called “cosmic web” – the pattern seen in the large-scale distribution of galaxies which appears to match the properties calculated from computer simulations like the one shown above, courtesy of Volker Springel. The picture that has emerged to account for these disparate clues is consistent with the idea that the Universe is dominated by a blend of dark energy and dark matter, and in which the early stages of cosmic evolution involved an episode of accelerated expansion called inflation.

A quarter of a century ago, our understanding of the state of the Universe was much less precise than today’s concordance cosmology. In those days it was a domain in which theoretical speculation dominated over measurement and observation. Available technology simply wasn’t up to the task of performing large-scale galaxy surveys or detecting slight ripples in the cosmic microwave background. The lack of stringent experimental constraints made cosmology a theorists’ paradise in which many imaginative and esoteric ideas blossomed. Not all of these survived to be included in the concordance model, but inflation proved to be one of the hardiest (and indeed most beautiful) flowers in the cosmological garden.

Although some of the concepts involved had been formulated in the 1970s by Alexei Starobinsky, it was Alan Guth who in 1981 produced the paper in which the inflationary Universe picture first crystallized. At this time cosmologists didn’t know that the Universe was as flat as we now think it to be, but it was still a puzzle to understand why it was even anywhere near flat. There was no particular reason why the Universe should not be extremely curved. After all, the great theoretical breakthrough of Einstein’s general theory of relativity was the realization that space could be curved. Wasn’t it a bit strange that after all the effort needed to establish the connection between energy and curvature, our Universe decided to be flat? Of all the possible initial conditions for the Universe, isn’t this very improbable? As well as being nearly flat, our Universe is also astonishingly smooth. Although it contains galaxies that cluster into immense chains over a hundred million light years long, on scales of billions of light years it is almost featureless. This also seems surprising. Why is the celestial tablecloth so immaculately ironed?

Guth grappled with these questions and realized that they could be resolved rather elegantly if only the force of gravity could be persuaded to change its sign for a very short time just after the Big Bang. If gravity could push rather than pull, then the expansion of the Universe could speed up rather than slow down. Then the Universe could inflate by an enormous factor (10^60 or more) in next to no time and, even if it were initially curved and wrinkled, all memory of this messy starting configuration would be lost. Our present-day Universe would be very flat and very smooth no matter how it had started out.

But how could this bizarre period of anti-gravity be realized? Guth hit upon a simple physical mechanism by which inflation might just work in practice. It relied on the fact that in the extreme conditions pertaining just after the Big Bang, matter does not behave according to the classical laws describing gases and liquids but instead must be described by quantum field theory. The simplest type of quantum field is called a scalar field; such objects are associated with particles that have no spin. Modern particle theory involves many scalar fields which are not observed in low-energy interactions, but which may well dominate affairs at the extreme energies of the primordial fireball.

Classical fluids can undergo what is called a phase transition if they are heated or cooled. Water, for example, exists in the form of steam at high temperature but it condenses into a liquid as it cools. A similar thing happens with scalar fields: their configuration is expected to change as the Universe expands and cools. Phase transitions do not happen instantaneously, however, and sometimes the substance involved gets trapped in an uncomfortable state in between where it was and where it wants to be. Guth realized that if a scalar field got stuck in such a “false” state, energy – in a form known as vacuum energy – could become available to drive the Universe into accelerated expansion. We don’t know which scalar field of the many that may exist theoretically is responsible for generating inflation, but whatever it is, it is now dubbed the inflaton.

This mechanism is an echo of a much earlier idea introduced to the world of cosmology by Albert Einstein in 1917. He didn’t use the term vacuum energy; he called it a cosmological constant. He also didn’t imagine that it arose from quantum fields, but considered it to be a modification of the law of gravity. Nevertheless, Einstein’s cosmological constant idea was incorporated by Willem de Sitter into a theoretical model of an accelerating Universe. This is essentially the same mathematics that is used in modern inflationary cosmology. The connection between scalar fields and the cosmological constant may also eventually explain why our Universe seems to be accelerating now, but that would require a scalar field with a much lower effective energy scale than that required to drive inflation. Perhaps dark energy is some kind of shadow of the inflaton.

Guth wasn’t the sole creator of inflation. Andy Albrecht and Paul Steinhardt, Andrei Linde, Alexei Starobinsky, and many others produced different and, in some cases, more compelling variations on the basic theme. It was almost as if it was an idea whose time had come. Suddenly inflation was an indispensable part of cosmological theory. Literally hundreds of versions of it appeared in the leading scientific journals: old inflation, new inflation, chaotic inflation, extended inflation, and so on. Out of this activity came the realization that a phase transition as such wasn’t really necessary: all that mattered was that the field should find itself in a configuration where the vacuum energy dominated. It was also realized that other theories not involving scalar fields could behave as if they did. Modified gravity theories or theories with extra space-time dimensions provide ways of mimicking scalar fields with rather different physics. And if inflation could work with one scalar field, why not have inflation with two or more? The only problem was that there wasn’t a shred of evidence that inflation had actually happened.

This episode provides a fascinating glimpse into the historical and sociological development of cosmology in the eighties and nineties. Inflation is undoubtedly a beautiful idea. But the problems it solves were theoretical problems, not observational ones. For example, the apparent fine-tuning of the flatness of the Universe can be traced back to the absence of a theory of initial conditions for the Universe. Inflation turns an initially curved universe into a flat one, but the fact that the Universe appears to be flat doesn’t prove that inflation happened. There are initial conditions that lead to present-day flatness even without the intervention of an inflationary epoch. One might argue that these are special and therefore “improbable”, and consequently that it is more probable that inflation happened than that it didn’t. But on the other hand, without a proper theory of the initial conditions, how can we say which are more probable? Based on this kind of argument alone, we would probably never really know whether we live in an inflationary Universe or not.

But there is another thread in the story of inflation that makes it much more compelling as a scientific theory because it makes direct contact with observations. Although it was not the original motivation for the idea, Guth and others realized very early on that if a scalar field were responsible for inflation then it should be governed by the usual rules governing quantum fields. One of the things that quantum physics tells us is that nothing evolves entirely smoothly. Heisenberg’s famous Uncertainty Principle imposes a degree of unpredictability on the behaviour of the inflaton. The most important ramification of this is that although inflation smooths away any primordial wrinkles in the fabric of space-time, in the process it lays down others of its own. The inflationary wrinkles are really ripples, and are caused by wave-like fluctuations in the density of matter travelling through the Universe like sound waves travelling through air. Without these fluctuations the cosmos would be smooth and featureless, containing no variations in density or pressure and therefore no sound waves. Even if it began in a fireball, such a Universe would be silent. Inflation puts the Bang in Big Bang.

The acoustic oscillations generated by inflation have a broad spectrum (they comprise oscillations with a wide range of wavelengths); they are of small amplitude (about one hundred-thousandth of the background); they are spatially random and have Gaussian statistics (like waves on the surface of the sea; this is the most disordered state); they are adiabatic (matter and radiation fluctuate together); and they are formed coherently. This last point is perhaps the most important. Because inflation happens so rapidly, all of the acoustic “modes” are excited at the same time. Hitting a metal pipe with a hammer generates a wide range of sound frequencies, but all the different modes of the pipe start their oscillations at the same time. The result is not just random noise but something moderately tuneful. The Big Bang wasn’t exactly melodic, but there is a discernible relic of the coherent nature of the sound waves in the pattern of temperature fluctuations seen in the Cosmic Microwave Background. The acoustic peaks seen in the Planck angular spectrum provide compelling evidence that whatever generated the pattern did so coherently.
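The point about coherence can be illustrated with a toy model (my own sketch, not a real CMB calculation): treat each mode as a cosine oscillation frozen out at a common ‘recombination’ time, and compare the power as a function of wavenumber when the phases are common versus random:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.5, 20.0, 400)   # wavenumber (arbitrary units)
t_rec = 1.0                        # 'recombination' time (arbitrary units)

# Coherent case: every mode starts oscillating at the same moment with
# the same phase, so the power at t_rec oscillates in k -- toy 'acoustic
# peaks' at the wavenumbers caught at maximum amplitude.
power_coherent = np.cos(k * t_rec) ** 2

# Incoherent case: each realization excites the modes with random phases;
# averaging over many realizations washes the peaks out to a flat 1/2.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(5000, k.size))
power_incoherent = np.mean(np.cos(k * t_rec + phases) ** 2, axis=0)

print("coherent power spans", power_coherent.min(), "to", power_coherent.max())
print("incoherent power is flat near", power_incoherent.mean())
```

The coherent case shows pronounced peaks and troughs in k, like the struck pipe; with random phases the average power is featureless, which is the sense in which the observed acoustic peaks argue for a coherent origin.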

[Figure: the Planck angular power spectrum of temperature fluctuations in the Cosmic Microwave Background, showing the acoustic peaks.]
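The role of coherence can be illustrated with a toy model (a sketch only, with arbitrary units and a single idealized oscillator per mode, nothing like a real cosmological calculation). Suppose each acoustic mode of wavenumber k oscillates as cos(k·t + φ). If every phase φ is zero, because all modes were excited at the same instant, the power observed at a fixed "last scattering" time shows peaks and troughs as a function of k; if the phases are random, the peaks average away:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.1, 10.0, 500)   # wavenumbers of the acoustic modes (arbitrary units)
t_star = 1.0                      # "time of last scattering" (arbitrary units)
n_real = 2000                     # realizations to average for the incoherent case

# Coherent case: every mode starts oscillating at the same moment with the
# same phase, so the power at t_star oscillates with k -- acoustic peaks.
coherent_power = np.cos(k * t_star) ** 2

# Incoherent case: each mode gets its own random starting phase in each
# realization; averaging washes out the k-dependence and leaves no peaks.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_real, k.size))
incoherent_power = np.mean(np.cos(k * t_star + phases) ** 2, axis=0)

print(coherent_power.min(), coherent_power.max())        # swings between ~0 and ~1
print(incoherent_power.mean(), incoherent_power.std())   # flat near 0.5, tiny scatter
```

The contrast between the oscillating coherent spectrum and the featureless incoherent one is the essence of why the peaks in the Planck spectrum point to simultaneous excitation of the modes.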
There are very few alternative theories on the table that are capable of reproducing these results, but does this mean that inflation really happened? Do they “prove” inflation is correct? More generally, is the idea of inflation even testable?

So did inflation really happen? Does Planck prove it? Will we ever know?

It is difficult to talk sensibly about scientific proof of phenomena that are so far removed from everyday experience. At what level can we prove anything in astronomy, even on the relatively small scale of the Solar System? We all accept that the Earth goes around the Sun, but do we really even know for sure that the Universe is expanding? I would say that the latter hypothesis has survived so many tests and is consistent with so many other aspects of cosmology that it has become, for pragmatic reasons, an indispensable part of our world view. I would hesitate, though, to say that it was proven beyond all reasonable doubt. The same goes for inflation. It is a beautiful idea that fits snugly within the standard cosmological model and binds many parts of it together. But that doesn't necessarily make it true. Many theories are beautiful, but that is not sufficient to prove them right.

When generating theoretical ideas scientists should be fearlessly radical, but when it comes to interpreting evidence we should all be unflinchingly conservative. The Planck measurements have also provided a tantalizing glimpse into the future of cosmology, and yet more stringent tests of the standard framework that currently underpins it. Primordial fluctuations produce not only a pattern of temperature variations over the sky, but also a corresponding pattern of polarization. This is fiendishly difficult to measure, partly because it is such a weak signal (only a few per cent of the temperature signal) and partly because the primordial microwaves are heavily polluted by polarized radiation from our own Galaxy. Polarization data from Planck are yet to be released; the fiendish data analysis challenge involved is the reason for the delay. But there is a crucial target that justifies these endeavours. Inflation does not just produce acoustic waves; it also generates different modes of fluctuation, called gravitational waves, that involve twisting deformations of space-time. Inflationary models connect the properties of the acoustic and gravitational fluctuations, so if the latter can be detected the implications for the theory are profound. Gravitational waves produce a very particular form of polarization pattern (called the B-mode) which can't be generated by acoustic waves, so this seems a promising way to test inflation. Unfortunately the B-mode signal is expected to be very weak, and the experience of WMAP suggests it might be swamped by foregrounds. But it is definitely worth a go, because a detection would add considerably to the evidence in favour of inflation as an element of physical reality.

But would even detection of primordial gravitational waves really test inflation? Not really. The problem with inflation is that it is a name given to a very general idea, and there are many (perhaps infinitely many) different ways of implementing the details, so one can devise versions of the inflationary scenario that produce a wide range of outcomes. It is therefore unlikely that there will be a magic bullet that will kill inflation dead. What is more likely is a gradual process of reducing the theoretical slack as much as possible with observational data, such as is happening in particle physics. For example, we have not yet identified the inflaton field (nor indeed any reasonable candidate for it) but we are gradually improving constraints on the allowed parameter space. Progress in this mode of science is evolutionary not revolutionary.

Many critics of inflation argue that it is not a scientific theory because it is not falsifiable. I don’t think falsifiability is a useful concept in this context; see my many posts relating to Karl Popper. Testability is a more appropriate criterion. What matters is that we have a systematic way of deciding which of a set of competing models is the best when it comes to confrontation with data. In the case of inflation we simply don’t have a compelling model to test it against. For the time being therefore, like it or not, cosmic inflation is clearly the best model we have. Maybe someday a worthy challenger will enter the arena, but this has not happened yet.

Most working cosmologists are as aware of the difficulty of testing inflation as they are of its elegance. There are also those who talk as if inflation were an absolute truth, and those who assert that it is not a proper scientific theory (because it isn't falsifiable). I can't agree with either of these factions. The truth is that we don't know how the Universe really began; we just work on the best ideas available and try to reduce our level of ignorance in any way we can. We can hardly expect the secrets of the Universe to be so easily accessible to our little monkey brains.
