Has BICEP2 bitten the dust?

Time for yet another update on the twists and turns of the ongoing saga of BICEP2, and in particular the growing suspicion that the measurements could be accounted for by Galactic dust rather than primordial gravitational waves; see various posts on this blog.

First there is a Nature News and Views article by Paul Steinhardt with the title Big Bang blunder bursts the multiverse bubble. As the title suggests, this piece is pretty scathing about the whole affair, for two main reasons. The first has to do with the manner of the release of the result: via a press conference, before the findings had been subjected to peer review. Steinhardt argues that future announcements of “discoveries” in this area

should be made after submission to journals and vetting by expert referees. If there must be a press conference, hopefully the scientific community and the media will demand that it is accompanied by a complete set of documents, including details of the systematic analysis and sufficient data to enable objective verification.

I also have reservations about the way the communication of this result was handled but I wouldn’t go as far as Steinhardt did. I think it’s quite clear that the BICEP2 team have detected something and that they published their findings in good faith. The fact that the media pushed the result as being a definitive detection of primordial gravitational waves wasn’t entirely their fault; most of the hype was probably down to other cosmologists (especially theorists) who got a bit over-excited.

It is true that if the BICEP2 signal turns out to be due to dust rather than primordial gravitational waves then the cosmology community will have a certain amount of egg on its face. On the other hand, this is what happens in science all the time. If we scientists want the general public to understand better how science actually works, we should not pretend that it deals in absolute certainties; we should explain that it is a process, and that, because it is a process operated by human beings, it is sometimes rather messy. The lesson to be learned is not about hiding the mess from the public but about communicating the uncertainties more accurately and more honestly.

Steinhardt’s other main point is one with which I disagree very strongly. Here is the core of his argument about inflation:

The common view is that it is a highly predictive theory. If that was the case and the detection of gravitational waves was the ‘smoking gun’ proof of inflation, one would think that non-detection means that the theory fails. Such is the nature of normal science. Yet some proponents of inflation who celebrated the BICEP2 announcement already insist that the theory is equally valid whether or not gravitational waves are detected. How is this possible?

The answer given by proponents is alarming: the inflationary paradigm is so flexible that it is immune to experimental and observational tests.

This is extremely disingenuous. There’s a real difference between a theory that is “immune to experimental and observational tests” and one which is just very difficult to test in that way. For a start, the failure of a given experiment to detect gravitational waves does not prove that gravitational waves don’t exist at some level; a more sensitive experiment might be needed. More generally, the inflationary paradigm is not completely specified as a theory; it is a complex entity which contains a number of free parameters that can be adjusted in the light of empirical data. The same is also true, for example, of the standard model of particle physics. The presence of these adjustable degrees of freedom makes it much harder to test the hypothesis than would be the case if there were no such wiggle room. Normal science often proceeds via the progressive tightening of the theoretical slack until there is no more room for manoeuvre. This process can take some time.

Inflation will probably be very difficult to test, but then there’s no reason why we should expect a definitive theoretical understanding of the very early Universe to come easily to us. Indeed, there is almost certainly a limit to the extent that we can understand the Universe with “normal science” but I don’t think we’ve reached it yet. We need to be more patient. So what if we can’t test inflation with our current technology? That doesn’t mean that the idea is unscientific. It just means that the Universe is playing hard to get.

Steinhardt continues with an argument about the multiverse. He states that inflation

almost inevitably leads to a multiverse with an infinite number of bubbles, in which the cosmic and physical properties vary from bubble to bubble. The part of the multiverse that we observe corresponds to a piece of just one such bubble. Scanning over all possible bubbles in the multiverse, everything that can physically happen does happen an infinite number of times. No experiment can rule out a theory that allows for all possible outcomes. Hence, the paradigm of inflation is unfalsifiable.

This may seem confusing given the hundreds of theoretical papers on the predictions of this or that inflationary model. What these papers typically fail to acknowledge is that they ignore the multiverse and that, even with this unjustified choice, there exists a spectrum of other models which produce all manner of diverse cosmological outcomes. Taking this into account, it is clear that the inflationary paradigm is fundamentally untestable, and hence scientifically meaningless.

I don’t accept the argument that “inflation almost inevitably leads to a multiverse”, but even if you do, the rest of the argument fails. Infinitely many outcomes may be possible, but are they equally probable? There is a well-defined Bayesian framework within which one could answer this question, given sufficient understanding of the underlying physics. I don’t think we know how to do this yet, but that doesn’t mean it can’t be done in principle.
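
To make that concrete, here is a minimal sketch (in Python, with entirely made-up numbers; the two “models” are toys, not real inflationary models) of the kind of calculation such a framework licenses: Bayes’ theorem turning priors and likelihoods into posterior odds.

    # Toy Bayesian model comparison: all numbers are illustrative.
    # Two hypothetical models predict different distributions for an
    # observable x; a measurement updates their relative probability.
    from scipy.stats import norm

    x_obs = 0.15  # hypothetical measured value of some observable

    # Likelihoods p(x | M): each toy model predicts a Gaussian for x
    like_m1 = norm.pdf(x_obs, loc=0.2, scale=0.05)  # model 1: x ~ N(0.2, 0.05^2)
    like_m2 = norm.pdf(x_obs, loc=0.0, scale=0.05)  # model 2: x ~ N(0.0, 0.05^2)

    # Priors p(M): equal weight a priori, purely for illustration
    prior_m1, prior_m2 = 0.5, 0.5

    # Bayes' theorem in odds form:
    #   p(M1|x) / p(M2|x) = [p(x|M1) / p(x|M2)] * [p(M1) / p(M2)]
    posterior_odds = (like_m1 / like_m2) * (prior_m1 / prior_m2)
    print(f"posterior odds, model 1 vs model 2: {posterior_odds:.1f}")

Even with equal priors the data can favour one model overwhelmingly; the hard part in the multiverse case is, of course, constructing physically motivated priors and likelihoods in the first place.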

For similar discussion of this issue see Ted Bunn’s Blog.

Steinhardt’s diatribe was accompanied yesterday by a sceptical news piece in the Grauniad entitled Gravitational waves turn to dust after claims of flawed analysis. This piece is basically a rehash of the argument that the BICEP2 results may be accounted for by dust rather than primordial gravitational waves, which is definitely a possibility, and that the BICEP2 paper involved a fairly dubious treatment of the foregrounds. In my opinion it’s an unnecessarily aggressive piece, but mentioning it here gives me the excuse to post the following screen grab from the science section of today’s Guardian website:

[Image: BICEP_thenandnow – Guardian science-section headlines on BICEP2, then and now]

Aficionados of Private Eye will probably think of the Just Fancy That section!

Where do I stand? I can hear you all asking that question, so I’ll make it clear that my view hasn’t really changed at all since March. I wouldn’t offer any more than even money on a bet that BICEP2 has detected primordial gravitational waves at all, and I’d offer good odds that, if the detection does stand, the value of the tensor-to-scalar ratio is significantly lower than the value of 0.2 claimed by BICEP2. In other words, I don’t know. Sometimes that’s the only really accurate statement a scientist can make.

11 Responses to “Has BICEP2 bitten the dust?”

  1. Nic Ross Says:

    Hi Peter,

    Can I ask your opinion on a BICEP2-related, but non-science issue?

    Leaving aside the level of (non-)detection of the gravitational-wave signature in the CMB data, one thing that I’ve found interesting, and that has slightly riled me, is the flood of theory papers on, e.g., the large value of r, and/or the tension with Planck, and/or the implications for various flavours and models of inflation.

    It seems to me that if the BICEP2 results do turn out to be due to, e.g., dust foregrounds, then the BICEP collaboration will be vilified, and “BICEP” will become a punchline or a cautionary tale (rightly or wrongly). However, there seems to be very little penalty for writing a paper based on (potentially) ‘faulty’ data and rushing to put it on the arXiv. We saw the same thing with the FTL neutrinos.

    While saying “I don’t know” and seeing scientific measurements and results rigorously challenged and tested (which is happening more and more in real time, in the social-media limelight) is part of the scientific method and process, I’m not convinced that a flood of potentially rushed-out papers is, and it is this quieter part of the current academy that in many ways worries me more…

    Thoughts??

    Yours,
    Nic

  2. David Whitehouse Says:

    Regarding the Guardian piece: the reason it is there is that journalists were given access to Steinhardt’s piece on Monday, so it’s unsurprising that it’s basically a rehash of his argument. What is interesting is that, of the thousand or so journalists worldwide who got Steinhardt’s piece before it was published in Nature, so few thought it was a story.

  3. telescoper Says:

    Just a reminder to those wishing to comment that I do not accept comments from anonymous or pseudonymous individuals with fake email addresses…

  4. I have two immediate comments to make. The first is that the BICEP2 press conference was titled “First direct evidence of cosmic inflation”, so I don’t think it is really fair to give the press or theorists the blame for the hype. The second is that the main proponent of the idea that inflation and the multiverse are unfalsifiable ideas is Linde (though he views this as an advantage).

    I’d also say that I think it’s very simplistic to claim that a Bayesian framework can solve the measure problem, but that’s probably far too complex an issue for a blog comment …

    • Anton Garrett Says:

      It’s simply the only correct way to tackle uncertainty. Whether the theoretical models and practical data can solve this particular problem is the real question.

      • I’m not disputing the correctness of the Bayesian framework. All the theorists who have tried to apply some sort of prior on the landscape in order to predict outcomes of experiments are working within that framework. That doesn’t mean there’s any obvious sensible way to choose a prior, which is essentially the measure problem. So just pointing out the existence of a Bayesian framework is not very relevant.

      • Anton Garrett Says:

        If the likelihood (i.e. the probability for the data) is sharply peaked with respect to the parameter being measured, then the prior for that parameter barely matters (unless you have information that it too is sharply peaked, which is not going to happen, because you did the experiment precisely because you knew little about the parameter). But if the likelihood is broad with respect to that parameter, then the issue you raise is indeed problematic.
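
        As a minimal numerical sketch of that point (toy numbers throughout, nothing to do with any real cosmological data), combine a Gaussian likelihood of adjustable width with two quite different priors and watch how far the posterior peak moves:

            # Prior sensitivity: sharp vs broad Gaussian likelihood (toy example).
            import numpy as np

            theta = np.linspace(-5, 5, 10001)  # parameter grid

            def posterior(like_width, prior):
                like = np.exp(-0.5 * (theta / like_width) ** 2)  # Gaussian likelihood
                post = like * prior
                return post / post.sum()  # normalise on the grid

            flat_prior = np.ones_like(theta)
            sloped_prior = np.exp(0.5 * theta)  # a deliberately biased prior

            for width in (0.1, 3.0):  # sharply peaked, then broad
                shift = (theta[np.argmax(posterior(width, sloped_prior))]
                         - theta[np.argmax(posterior(width, flat_prior))])
                print(f"likelihood width {width}: prior shifts the peak by {shift:.3f}")

        With the sharp likelihood the biased prior moves the peak by a few thousandths; with the broad one it moves it by several units, i.e. the prior takes over.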

  5. […] panel discussion on the topic (nasty at 1:16:50), a headline comparison and articles from 5 June here, here, here, here and here, 4 June, 3 June here, here, here and here, 2 June here and here, 30 […]

  6. […] In the Dark + The Reference Frame + Francis (th)E […]

  7. brissioni Says:

    I am always a fan of honesty.

  8. telescoper Says:

    I really don’t know why Nature published it, actually. It’s just grumpy drivel.
