Archive for Dark Energy

Dark Energy – Lectures by Varun Sahni

Posted in The Universe and Stuff on June 9, 2019 by telescoper

I thought I’d share this lecture course about Dark Energy here. It was delivered by Varun Sahni at an international school on cosmology earlier this year. The material is quite technical in places but I’m sure these lectures will prove a very helpful introduction to, for example, new PhD students in this area. Varun has been a very good friend and colleague of mine for many years, and he is an excellent lecturer!

Here are the three lectures:

The Negative Mass Bug

Posted in Astrohype, Open Access, The Universe and Stuff on February 25, 2019 by telescoper

You may have noticed that some time ago I posted about a paper by Jamie Farnes, published in Astronomy & Astrophysics but available on the arXiv here, which entails a suggestion that material with negative mass might account for dark energy and/or dark matter.

Here is the abstract of said paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

Well, there’s a new paper just out on the arXiv by Hector Socas-Navarro, with the abstract:

A recent work by Farnes (2018) proposed an alternative cosmological model in which both dark matter and dark energy are replaced with a single fluid of negative mass. This paper presents a critical review of that model. A number of problems and discrepancies with observations are identified. For instance, the predicted shape and density of galactic dark matter halos are incorrect. Also, halos would need to be less massive than the baryonic component or they would become gravitationally unstable. Perhaps the most challenging problem in this theory is the presence of a large-scale version of the `runaway’ effect, which would result in all galaxies moving in random directions at nearly the speed of light. Other more general issues regarding negative mass in general relativity are discussed, such as the possibility of time-travel paradoxes.

Among other things there is this:

After initially struggling to reproduce the F18 results, a careful inspection of his source code revealed a subtle bug in the computation of the gravitational acceleration. Unfortunately, the simulations in F18 are seriously compromised by this coding error whose effect is that the gravitational force decreases with the inverse of the distance, instead of the distance squared.

Oh dear.

I don’t think I need go any further into this particular case, which would just rub salt into the wounds of Farnes (2018) but I will make a general comment. Peer review is the best form of quality stamp that we have but, as this case demonstrates, it is by no means flawless. The paper by Farnes (2018) was refereed and published, but is now shown to be wrong*. Just as authors can make mistakes so can referees. I know I’ve screwed up as a referee in the past so I’m not claiming to be better than anyone in saying this.

*This claim is contested: see the comment below.

I don’t think the lesson is that we should just scrap peer review, but I do think we need to be more imaginative about how it is used than just relying on one or two individuals to do it. This case shows that science eventually works, as the error was found and corrected, but that was only possible because the code used by Farnes (2018) was made available for scrutiny. This is not always what happens. I take this as a vindication of open science, and an example of why scientists should share their code and data to enable others to check the results. I’d like to see a system in which papers are not regarded as `final’ documents but things which can be continuously modified in response to independent scrutiny, but that would require a major upheaval in academic practice and is unlikely to happen any time soon.

In this case, in the time since publication there has been a large amount of hype about the Farnes (2018) paper, and it’s unlikely that any of the media who carried stories about the results therein will ever publish retractions. This episode does therefore illustrate the potentially damaging effect on public trust that the excessive thirst for publicity can have. So how do we balance open science against the likelihood that wrong results will be taken up by the media before the errors are found? I wish I knew!

The Future Circular Collider: what’s the MacGuffin?

Posted in Science Politics, The Universe and Stuff on February 7, 2019 by telescoper

I’ve been reading a few items here and there about proposals for a Future Circular Collider, even larger than the Large Hadron Collider (and consequently even more expensive). No doubt particle physicists interested in accelerator experiments will be convinced this is the right move, but of course there are other projects competing for funds and it’s by no means certain that the FCC will actually happen.

One of the important things about `Big Science’ when it gets this big is that it has to capture the imagination of people with political influence if it is to be granted funding. Based on past experience that means that there has to be a Big Discovery to be made or a Big Idea to be tested. This Big Thing has to be simple enough for politicians to understand and exciting enough to capture their imagination (and that of the public). In the case of the Large Hadron Collider (LHC), for example, this was the Higgs Boson. In the case of the Euclid space mission, the motivation is Dark Energy.

The Big Thing that sells a project to politicians is not necessarily the thing that most scientists are interested in. The LHC has done a lot of things other than discover the Higgs, and Euclid will do many things other than probe Dark Energy, but there has to be one thing to set it all in motion. It seems to me that the Big Question about the FCC is whether there is something specific that can motivate this project in the way the Higgs did for the LHC. If so, what is it?

Answers on a postcard or, better, through the comments box below.

 

Humphrey Bogart with the eponymous Maltese Falcon

Anyway, these thoughts reminded me of the concept of a MacGuffin. Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the “top secret plans” in Hitchcock’s The Thirty Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the “undisclosed object” type of MacGuffin, or it may be scarily obvious, like a doomsday machine or some other “Big Dumb Object” you might find in a science fiction thriller.

Or the MacGuffin may not be a real thing at all. It could be an event or an idea or even something that doesn’t actually exist in any sense, such as the fictitious decoy character George Kaplan in North by Northwest. In fact North by Northwest is an example of a movie with more than one MacGuffin. Its convoluted plot involves espionage and the smuggling of what is only cursorily described as “government secrets”. These are the main MacGuffin; George Kaplan is a sort of sub-MacGuffin. But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters played by Cary Grant and Eva Marie Saint that really engages the characters and the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.

Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved by Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was however always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have usually forgotten all about it. Hitchcock’s movies rarely bother to explain their MacGuffin(s) in much detail and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.

Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity.)

There’s nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail in the tales of King Arthur and the Knights of the Round Table, in which the Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found. That’s often how it goes with MacGuffins – even the Maltese Falcon turned out in the end to be a fake – they’re only really needed to start things off.

So let me rephrase the question I posed earlier on. In the case of the Future Circular Collider, what’s the MacGuffin?

Negative Mass, Phlogiston and the State of Modern Cosmology

Posted in Astrohype, The Universe and Stuff on December 7, 2018 by telescoper

A graphical representation of something or other.

I’ve noticed a modest amount of hype – much of it gibberish – going around about a paper published in Astronomy & Astrophysics but available on the arXiv here, which entails a suggestion that material with negative mass might account for dark energy and/or dark matter. Here is the abstract of the paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

For a skeptical commentary on this work, see here.

The idea of negative mass is by no means new, of course. If you had asked a seventeenth-century scientist the question “what happens when something burns?” the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight, implying an increase in mass of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative mass. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, `levity’. Nowadays we would probably say `anti-gravity’.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the Universe is accelerating (this is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
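Putting rough numbers to this (the density parameters \Omega_m and \Omega_{\Lambda} used below are the standard notation, introduced here for illustration rather than taken from the post):

\Omega_m + \Omega_{\Lambda} \simeq 1 \quad \mbox{(flat spatial sections, from the CMB)}, \qquad \Omega_m \simeq 0.25 \quad \mbox{(from clusters)} \quad \Rightarrow \quad \Omega_{\Lambda} \simeq 0.75,

q_0 = \tfrac{1}{2}\Omega_m - \Omega_{\Lambda} \simeq -0.6,

and a negative deceleration parameter q_0 is precisely the acceleration indicated by the supernovae.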

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe, as the paper that prompted this piece might be taken to suggest, the dark energy really is something like phlogiston. At least I prefer the name to quintessence. However, I think the author has missed a trick. I think that to create a properly trendy cosmological theory he should include the concept of supersymmetry, according to which there should be a fermionic counterpart of phlogiston called the phlogistino…

Strong constraints on cosmological gravity from GW170817 and GRB 170817A

Posted in The Universe and Stuff on October 24, 2017 by telescoper

One of the many interesting scientific results to emerge from last week’s announcement of a gravitational wave source (GW170817) with an electromagnetic counterpart (GRB 170817A) is the fact that it provides constraints on departures from Einstein’s General Theory of Relativity. In particular, the (lack of a) time delay between the arrival of the gravitational and electromagnetic signals can be used to rule out models that predict that gravitational waves and electromagnetic waves travel with different speeds. The fractional difference between the two propagation speeds implied by this source is constrained to be of order 10^{-15} or less, which actually rules out many of the proposed alternatives to general relativity. Modifications of Einstein’s gravity have been proposed for a number of reasons, including the desire to explain the dynamics of the expanding Universe without the need for Dark Energy or Dark Matter (or other exotica), but many of these are now effectively dead.
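As a back-of-the-envelope illustration of how such a tight bound arises, here is a short Python sketch; the distance (~40 Mpc) and delay (~1.7 s) are round numbers of the right order for this event, used purely for illustration, and the published bounds depend on what intrinsic emission delay is assumed.

```python
# Rough estimate of how tightly a near-coincident arrival constrains the
# fractional difference between the speeds of gravitational and
# electromagnetic waves. The input numbers are illustrative assumptions.

MPC_IN_M = 3.086e22      # one megaparsec in metres
C = 2.998e8              # speed of light in m/s

distance_mpc = 40.0      # assumed distance to the source
observed_delay_s = 1.7   # assumed delay of the gamma-rays after the GW signal

travel_time_s = distance_mpc * MPC_IN_M / C
fractional_delay = observed_delay_s / travel_time_s

print(f"light travel time ~ {travel_time_s:.2e} s")   # ~ 4e15 s
print(f"fractional delay  ~ {fractional_delay:.1e}")  # ~ 4e-16
# Even attributing the whole delay to a difference in propagation speed
# leaves |v_gw - c|/c tiny, which is what kills many modified-gravity models.
```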

Anyway, I bookmarked a nice paper about this last week while I was in India but forgot to post it then, so if you’re interested in reading more have a look at this arXiv paper by Baker et al., which has the following abstract:

The detection of an electromagnetic counterpart (GRB 170817A) to the gravitational wave signal (GW170817) from the merger of two neutron stars opens a completely new arena for testing theories of gravity. We show that this measurement allows us to place stringent constraints on general scalar-tensor and vector-tensor theories, while allowing us to place an independent bound on the graviton mass in bimetric theories of gravity. These constraints severely reduce the viable range of cosmological models that have been proposed as alternatives to general relativistic cosmology.

The Great Dark Energy Poll

Posted in The Universe and Stuff on June 8, 2017 by telescoper

Yesterday was a very busy day: up early to check out of my hotel and head to the third day of the Euclid Consortium meeting for the morning session, then across to the Institute of Physics for a Diversity and Inclusion Panel meeting, then back to the Euclid Consortium meeting for the last session of the day, then introducing the two speakers at the evening event, then to Paddington for the 7.15 train back to Cardiff. I was not inconsiderably tired when I got home.

I had to bale out of the evening session to get the train I was booked on, but it seemed to be going well. Before I left, Ofer Lahav asked for an informal show of hands about a few possibilities relating to the nature of Dark Energy. Since today is polling day for the 2017 General Election, I thought it might be a good idea to distract people from politics for a bit by running a similar poll on here.

There are lots of possibilities for what dark energy may turn out to be, but I’ve decided to allow only six broad classes into which most candidate explanations can be grouped:

  1. The cosmological constant, originally introduced as a modification of the left-hand side of Einstein’s general theory of relativity – the side that describes gravity – but more often regarded nowadays as a modification of the right-hand side representing a vacuum energy. Whichever interpretation you make of this, its defining characteristic is that it is constant.
  2. Modified gravity, in other words some modification of the left-hand side of Einstein’s equations that manifests itself cosmologically and is more complicated than the cosmological constant.
  3. Dynamical dark energy, i.e. some other modification of the energy-momentum tensor on the right-hand side of Einstein’s equations that looks like some form of “stuff” that varies dynamically rather than being cosmologically constant (a conventional shorthand for this distinction is given just after this list).
  4. Violation of the cosmological principle by the presence of large-scale inhomogeneities which result in significant departures from the usual Friedmann-Robertson-Walker description within which the presence of dark energy is inferred.
  5. Observational error, by which I mean that there is no dark energy at all: its presence is inferred erroneously on the basis of flawed measurements, e.g. failure to account for systematics.
  6.  Some other explanation – this would include the possibility that the entire standard cosmological framework is wrong and we’re looking at the whole thing from the wrong point of view. If you choose this option you might want to comment through the box below what you have in mind.
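A conventional shorthand for the difference between options 1 and 3 – standard in the literature, though not spelled out in the post itself – is the equation-of-state parameter of the dark component,

w \equiv \frac{p}{\rho c^2},

with w=-1 exactly for a cosmological constant and w \neq -1 (possibly varying with redshift) for dynamical dark energy; surveys such as Euclid are designed to measure departures of w from -1.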

Well, there are the six candidates. Make your choice:

Gravity in the Quantum Vacuum

Posted in The Universe and Stuff on May 18, 2017 by telescoper

Yesterday I noticed an interesting paper which has been on the arXiv for a few months but which has just been published in Physical Review D and has been highlighted by the Editors of that esteemed journal. The authors are Qingdi Wang, Zhen Zhu and Bill Unruh – all of them from the University of British Columbia in Vancouver.

In a nutshell, the paper suggests that we may have been thinking about the effect of vacuum energy in completely the wrong way in the context of Dark Energy.

It’s a long paper (35 pages) which I haven’t had time to work through completely yet, and I don’t know whether it will stand up. I have to say, though, that I’ve long felt that the problem of dark energy will only be solved by a fundamental reappraisal of the underlying physics, rather than by adding new fields or other such contrivances.

I’d be interested in comments from people who have read the paper thoroughly. I’m flying back to Blighty this evening so I hope I can study the article more thoroughly on the plane.

Is there a kinematic backreaction in cosmology?

Posted in The Universe and Stuff on March 28, 2017 by telescoper

I just noticed that a paper has appeared on the arXiv with the confident title There is no kinematic backreaction. Normally one can be skeptical about such bold claims, but this one is written by Nick Kaiser and he’s very rarely wrong…

The article has a very clear abstract.


This is an important point of debate, because the inference that the universe is dominated by dark energy (i.e. some component of the cosmic energy density that violates the strong energy condition) relies on the assumption that the distribution of matter is homogeneous and isotropic (i.e. that the Universe obeys the Cosmological Principle). Added to the assumption that the large-scale dynamics of the Universe are described by the general theory of relativity, this means that the evolution of the cosmos is described by the Friedmann equations. It is by comparison with the Friedmann equations that we can infer the existence of dark energy from the apparent change in the cosmic expansion rate over time.

But the Cosmological Principle can only be true in an approximate sense, on very large scales, as the universe does contain galaxies, clusters and superclusters. It has been a topic of some discussion over the past few years as to whether the formation of cosmic structure may influence the expansion rate by requiring extra terms that do not appear in the Friedmann equations.

Nick Kaiser says `no’. It’s a succinct and nicely argued paper but it is entirely Newtonian. It seems to me that if you accept that his argument is correct then the only way you can maintain that backreaction can be significant is by asserting that it is something intrinsically relativistic that is not covered by a Newtonian argument. Since all the relevant velocities are much less than that of light and the metric perturbations generated by density perturbations are small (~10^{-5}), this seems a hard case to argue.
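As a rough illustration of why those metric perturbations are so small, here is a back-of-the-envelope estimate in Python; the cluster mass and radius below are assumptions chosen purely for illustration, not numbers taken from the paper.

```python
# Order-of-magnitude estimate of the metric perturbation Phi/c^2 produced
# by a rich cluster of galaxies. The input values are illustrative assumptions.

G = 6.674e-11            # gravitational constant, SI units
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg
MPC_IN_M = 3.086e22      # one megaparsec in metres

mass = 1e15 * M_SUN      # assumed cluster mass
radius = 2.0 * MPC_IN_M  # assumed cluster radius

phi_over_c2 = G * mass / (radius * C**2)
print(f"Phi/c^2 ~ {phi_over_c2:.1e}")   # ~ 2e-5, i.e. of order 10^-5
```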

I’d be interested in receiving backreactions to this paper via the comments box below.

One Hundred Years of the Cosmological Constant

Posted in History, The Universe and Stuff on February 8, 2017 by telescoper

It was exactly one hundred years ago today – on 8th February 1917 – that a paper was published in which Albert Einstein explored the cosmological consequences of his general theory of relativity, in the course of which he introduced the concept of the cosmological constant.

For the record, the full reference to the paper is Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie, and it was published in the Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften. You can find the full text of the paper here. There’s also a nice recent discussion of it by Cormac O’Raifeartaigh and others on the arXiv here.

Here is the first page:


It’s well worth looking at this paper – even if your German is as rudimentary as mine – because the argument Einstein constructs is rather different from what you might imagine (or at least that’s what I thought when I first read it). As you see, it begins with a discussion of a modification of Poisson’s equation for gravity.

As is well known, Einstein introduced the cosmological constant in order to construct a static model of the Universe. The 1917 paper pre-dates the work of Friedmann (1922) and Lemaître (1927) that established much of the language and formalism used to describe cosmological models nowadays, so I thought it might be interesting just to recapitulate the idea using modern notation. Actually, in honour of the impending centenary I did this briefly in my lecture on Physics of the Early Universe yesterday.

To simplify matters I’ll just consider a “dust” model, in which pressure can be neglected. In this case, the essential equations governing a cosmological model satisfying the Cosmological Principle are:

\ddot{a} = -\frac{4\pi G \rho a }{3} +\frac{\Lambda a}{3}

and

\dot{a}^2= \frac{8\pi G \rho a^2}{3} +\frac{\Lambda a^2}{3} - kc^2.

In these equations a(t) is the cosmic scale factor (which measures the relative size of the Universe) and dots are derivatives with respect to cosmological proper time, t. The density of matter is \rho>0 and the cosmological constant is \Lambda. The quantity k is the curvature of the spatial sections of the model, i.e. the surfaces on which t is constant.

Now our task is to find a solution of these equations with a(t)=A, say, constant, i.e. with \dot{a}=0 and \ddot{a}=0 for all time.

The first thing to notice is that if \Lambda=0 then this is impossible. One can solve the second equation to make the LHS zero at a particular time by matching the density term to the curvature term, but that only makes a universe that is instantaneously static. The second derivative is non-zero in this case so the system inevitably evolves away from the situation in which \dot{a}=0.
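In symbols, using the same dust equations as above: setting \Lambda=0 in the first equation gives

\ddot{a} = -\frac{4\pi G \rho a}{3} < 0 \quad \mbox{for} \quad \rho > 0,

so a cannot remain constant.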

With the cosmological constant term included, it is a different story. First make \ddot{a}=0 in the first equation, which means that

\Lambda=4\pi G \rho.

Now we can make \dot{a}=0 in the second equation by setting

\Lambda a^2 = 4\pi G \rho a^2 = kc^2

This gives a static universe model, usually called the Einstein universe. Notice that the curvature must be positive, so this is a universe of finite spatial extent but with infinite duration.
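For completeness, the size of the static solution follows directly from the last relation. Taking the conventional normalization k=+1 for the closed case (an assumption about units, not something stated explicitly above):

a^2 = \frac{c^2}{\Lambda} \quad \Rightarrow \quad a = \frac{c}{\sqrt{\Lambda}} = \frac{c}{\sqrt{4\pi G \rho}},

which is the radius of the Einstein universe.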

This idea formed the basis of Einstein’s own cosmological thinking until the early 1930s, when observations began to make it clear that the universe was not static at all, but expanding. In that light it seems that adding the cosmological constant wasn’t really justified, and it is often said that Einstein regarded its introduction as his “biggest blunder”.

I have two responses to that. One is that general relativity, when combined with the cosmological principle, but without the cosmological constant, requires the universe to be dynamical rather than static. If anything, therefore, you could argue that Einstein’s biggest blunder was to have failed to predict the expansion of the Universe!

The other response is that, far from it being an ad hoc modification of his theory, there are actually sound mathematical reasons for allowing the cosmological constant term. Although Einstein’s original motivation for considering this possibility may have been misguided, he was justified in introducing it. He was right, if perhaps for the wrong reasons. Nowadays observational evidence suggests that the expansion of the universe may be accelerating. The first equation above tells you that this is only possible if \Lambda\neq 0.
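Explicitly, from the first (dust) equation above,

\ddot{a} > 0 \quad \Longleftrightarrow \quad \Lambda > 4\pi G \rho,

so an accelerating dust universe requires a cosmological constant that is not just non-zero but positive.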

Finally, I’ll just mention another thing in the light of the Einstein (1917) paper. It is clear that Einstein thought of the cosmological constant as a modification of the left-hand side of the field equations of general relativity, i.e. the part that expresses the effect of gravity through the curvature of space-time. Nowadays we tend to think of it instead as a peculiar form of energy (called dark energy) that has negative pressure. This sits on the right-hand side of the field equations instead of the left, so it is not so much a modification of the law of gravity as an exotic form of energy. You can see the details in an older post here.
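To make that bookkeeping explicit – a sketch in the usual conventions (with c=1), not notation used in the post itself – moving the \Lambda term across the field equations amounts to writing

G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu} \quad \Longleftrightarrow \quad G_{\mu\nu} = 8\pi G \left( T_{\mu\nu} + T^{(\Lambda)}_{\mu\nu} \right), \qquad T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda}{8\pi G}\, g_{\mu\nu},

and the term moved to the right-hand side behaves like a fluid with energy density \rho_{\Lambda} = \Lambda/8\pi G and pressure p_{\Lambda} = -\rho_{\Lambda}, i.e. exactly the negative pressure mentioned above.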

A Non-accelerating Universe?

Posted in Astrohype, The Universe and Stuff on October 26, 2016 by telescoper

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis, but I’ve skimmed over what they’ve done and as far as I can tell it looks like a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version) in terms of the dark energy (y-axis) and matter (x-axis) density parameters:


Models lying on the line shown in this plane have exactly the balance between Ωm and ΩΛ needed for the decelerating effect of the former to cancel the accelerating effect of the latter (a special case is the origin of the plot, called the Milne model, which represents an entirely empty universe). The contours show the “1, 2 and 3σ” regions, with all other parameters treated as nuisance parameters. It is true that the line of no acceleration does pass inside the 3σ contour, so in that sense the data are not entirely inconsistent with a non-accelerating universe. On the other hand, the “best fit” (at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
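For anyone who wants to check the arithmetic, the deceleration parameter of a matter-plus-Λ model is q0 = Ωm/2 − ΩΛ, so the line of no acceleration is ΩΛ = Ωm/2. Here is a trivial sketch in plain Python; the only input numbers are the best-fit values quoted above.

```python
# Deceleration parameter for a universe containing pressureless matter
# and a cosmological constant:  q0 = Omega_m / 2 - Omega_Lambda.
# q0 < 0 means the expansion is accelerating today; the "no acceleration"
# line in the (Omega_m, Omega_Lambda) plane is Omega_Lambda = Omega_m / 2.

def q0(omega_m, omega_lambda):
    """Present-day deceleration parameter for a matter + Lambda model."""
    return 0.5 * omega_m - omega_lambda

print(q0(0.341, 0.569))   # best fit quoted above: about -0.40, accelerating
print(q0(0.0, 0.0))       # the Milne model (empty universe): 0.0, no acceleration
```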

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is homogeneous and isotropic on large scales, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!