A Non-accelerating Universe?

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis but I’ve skimmed over what they’ve done and as far as I can tell it looks like a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version) in terms of the dark energy (y-axis) and matter (x-axis) density parameters:

[Figure: 1, 2 and 3σ likelihood contours in the (Ωm, ΩΛ) plane, from the paper.]

Models lying along the line shown in this plane have the correct balance between Ωm and ΩΛ to cancel the decelerating effect of the former against the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours are the “1, 2 and 3σ” confidence regions, with all other parameters regarded as nuisance parameters. It is true that the line of no acceleration does pass inside the 3σ contour, so in that sense non-accelerating models are not entirely inconsistent with the data. On the other hand, the “best fit” (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
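
To make the geometry of the plot concrete, here is a minimal sketch (an illustration of the standard kinematics rather than anything from the paper itself) evaluating the deceleration parameter q0 = Ωm/2 − ΩΛ, which is negative for accelerating models and zero along the line of no acceleration:

    # Deceleration parameter for a universe containing matter and a
    # cosmological constant: q0 = Omega_m/2 - Omega_Lambda.
    # q0 < 0 means acceleration; the "no acceleration" line is q0 = 0,
    # i.e. Omega_Lambda = Omega_m / 2.
    def q0(omega_m, omega_lambda):
        return 0.5 * omega_m - omega_lambda

    print(q0(0.341, 0.569))   # quoted best fit: about -0.40, i.e. accelerating
    print(q0(0.0, 0.0))       # Milne (empty) model: exactly zero, coasting
    print(q0(0.3, 0.15))      # a point on the no-acceleration line: zero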

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure), which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.
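
For reference, the curvature remark follows from the usual Friedmann sum rule for homogeneous models containing only matter, Λ and curvature:

    \Omega_m + \Omega_\Lambda + \Omega_k = 1,

so the Milne model (Ωm = ΩΛ = 0) has Ωk = 1, corresponding to strongly negative spatial curvature, whereas the concordance values Ωm ≈ 0.3, ΩΛ ≈ 0.7 give Ωk ≈ 0, the near-flatness favoured by the CMB.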

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is homogeneous and isotropic on large scales, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!
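
To illustrate what interpreting these measurements “within a framework” involves, here is a minimal sketch (assuming H0 = 70 km/s/Mpc and the standard FRW distance formulae, purely for illustration) comparing the distance modulus predicted by a flat ΛCDM model with that of an empty, non-accelerating (Milne) universe; a supernova Hubble diagram essentially asks which of these curves the standardised magnitudes follow:

    # Distance modulus mu(z) = 5*log10(d_L/10pc) for two FRW models:
    # flat Lambda-CDM and an empty (Milne) universe.
    import numpy as np
    from scipy.integrate import quad

    C_KM_S = 299792.458   # speed of light, km/s
    H0 = 70.0             # assumed Hubble constant, km/s/Mpc (illustrative)

    def dl_flat_lcdm(z, om=0.3, ol=0.7):
        """Luminosity distance in Mpc for a flat matter + Lambda model."""
        integrand = lambda zp: 1.0 / np.sqrt(om * (1.0 + zp)**3 + ol)
        dc, _ = quad(integrand, 0.0, z)
        return (1.0 + z) * (C_KM_S / H0) * dc

    def dl_milne(z):
        """Luminosity distance in Mpc for an empty (coasting) universe."""
        return (C_KM_S / H0) * z * (1.0 + 0.5 * z)

    def mu(d_l_mpc):
        """Distance modulus from a luminosity distance in Mpc."""
        return 5.0 * np.log10(d_l_mpc) + 25.0

    for z in (0.1, 0.5, 1.0):
        print(z, round(mu(dl_flat_lcdm(z)) - mu(dl_milne(z)), 3))
    # The difference is only of order a tenth of a magnitude, which is why
    # the statistical treatment of the light-curve standardisation matters.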


23 Responses to “A Non-accelerating Universe?”

  1. Brian Schmidt Says:

    Clearly I have a bias in this – I do agree CMB, BAOs, and SN all contribute to the picture, and only using SN Ia to make a point is not compelling at this point, independent of what you think of SN Ia.

    I think there is at least one issue with the analysis (see https://davidarnoldrubin.com/2016/10/24/marginal-marginal-evidence-for-cosmic-acceleration-from-type-ia-supernovae/),

    also note that the SN do not absolutely require GR to show acceleration, rather only isotropy and homogeneity, if one is prepared to expand RW into a Taylor series and simply use distance vs redshift – no model… (e.g. http://adsabs.harvard.edu/abs/2008ApJ...677....1D)
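
    For reference, the kinematic (cosmographic) expansion alluded to here is, assuming only a Robertson–Walker metric,

    d_L(z) = (c z / H_0) [ 1 + (1 - q_0) z / 2 + O(z^2) ],  with  q_0 \equiv - \ddot{a} a / \dot{a}^2,

    so the shape of the magnitude–redshift relation constrains q_0 directly – acceleration meaning q_0 < 0 – without committing to any particular field equations.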

    • telescoper Says:

      Yes, quite right. Any metric theory will work. But there needs to be a model for the parameter constraints to make sense.

  2. telescoper Says:

    I left that as an exercise for the reader!

  3. Just to be pedantic: this paper is not published in the journal Nature, but rather in the journal Scientific Reports, which is published by the Nature publishing group. I have to admit I hadn’t heard of this journal before this article…

    • telescoper Says:

      Quite so, I probably should have said “by Nature” rather than “in Nature”…Anyway, I’ve now changed it.

  4. I feel the statistical analysis in this paper is based on a false assumption about the data. If their assumptions were true, the observed decay rate variable x1 would be independent of the redshift in the observed sample. But this is not the case: there is a 4 sigma significance correlation between decay rate and redshift, such that the high redshift fraction has more slowly decaying and hence higher luminosity supernovae. Normally the decay rate vs luminosity calibration would account for this, but the Nielsen et al. procedure reduces the effectiveness of the calibration. This allows the Malmquist bias to enter their fit, leading to reduced evidence for acceleration, since the sign of acceleration is fainter SNe at high redshift.

    • Dear Peter, many thanks for discussing our paper. Our calculation is easily checked using the python script linked to our arXiv posting (http://dx.doi.org/10.5281/zenodo.34487). It also provides the full dataset used, kindly made public by Betoule et al. who have done the most comprehensive ‘JLA’ analysis to date including HST, SDSS II, SNLS and nearby SNe Ia (https://arxiv.org/abs/1401.4064).

      We use *exactly* the same dataset – but unlike JLA (and previous authors) we do not adjust an arbitrary error added to each data point in order to get a ‘constrained \chi^2’ of 1/d.o.f. for the *fit to LCDM*. This may be OK for parameter estimation when one is certain the model is right – it certainly is not correct for model selection! As Alan Heavens emphasises above it is important to get the statistics right and I do not need to tell you why the Maximum Likelihood Estimator is a better approach. We show the consequence of this in the usual \Omega_\Lambda-\Omega_m plane – the best-fit moves to higher values of \Omega_m and the 3 \sigma contour now overlaps with the line separating acceleration from deceleration. You say that does not surprise you … but the 2011 Nobel Prize was awarded “… for the discovery of the accelerating expansion of the universe through observations of distant supernovae”. All we are saying is that is *not true* even with the latest much expanded dataset, if we accept the 5 \sigma standard for a discovery of fundamental importance (like the Higgs boson or gravitational waves).

      To make our calculation analytic we use a Gaussian model for the spread of the (SALT 2) light-curve template parameters x_1 (stretch) and c (colour). As seen in our Fig. 1 this is not a bad first approximation. Of course there may well be Malmquist bias and other selection effects in the data. JLA did attempt to correct for this in their calibration of the decay rate versus luminosity and we used their ‘bias corrected’ m_B values. So Ned Wright is being disingenuous when he asserts that our analysis is “based on a false assumption about the data” because we have not taken into account the redshift dependence of the x_1 and c distributions. In fact we have done *no more and no less* than JLA and anyone else earlier (UNION/2/2.1, PanSTARRS etc).
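
      For readers unfamiliar with the notation, the standardisation being discussed is, schematically (the usual SALT2/Tripp form, not a transcription of the paper’s exact likelihood),

      \mu = m_B^* - (M_B - \alpha x_1 + \beta c),

      with the approach described above treating the true x_1 and c of each supernova as drawn from global Gaussian distributions whose parameters are fitted along with the cosmology, the likelihood then being maximised over these nuisance parameters rather than tuned through an added per-point error.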

      Brian Schmidt above cites a blog post by David Rubin who is involved in the recently proposed ‘UNITY’ approach (https://arxiv.org/abs/1507.01602) to account for such effects. This is most welcome – however Rubin et al have *not* shown that doing so undermines our conclusion. In fact they did not mention our paper (which had appeared earlier). It would indeed be remarkable if making such post facto corrections yielded the old result obtained using an incorrect statistic – unless of course one aims to somehow get the “correct answer” (to quote one of our adversarial referees)! Yet on his blog Rubin claims: “using the basic model of Nielsen et al., but modeling each category of SN discovery (nearby, SDSS, SNLS, HST) with its own population mean … The chi^2 between, e.g., Milne and LambdaCDM increases from 11.61 to 16.84”. If he is really using our code (rather than the flawed ‘constrained \chi^2’) he ought to check with us that he is doing it right and/or publish a paper with details so others can judge … rather than make unsubstantiated negative remarks about our work on social media. It is called (scientific) etiquette.

      Well I would not normally try to defend a serious scientific result on social media either – however since professional cosmologists are weighing in here I hope you will grant us the right of reply (everything I have said above is on behalf of my coauthors Jeppe Nielsen and Alberto Guffanti as well). Allow me to also make a remark in response to other comments on your post. It astonishes me that so many cosmologists do not realise how much of what they believe to be true depends on the *assumed* model. Only if we assume perfect homogeneity and pressureless ‘dust’ (as was done in the 1930s … before any data!) do we get the simple sum rule: 1 = \Omega_m + \Omega_k + \Omega_\Lambda. The claim that ‘standard rulers’ like the CMB acoustic peak and BAO uniquely fix these parameters by providing complementary constraints to supernovae is based on this sum rule being exactly true. And one then infers that \Lambda is of order H_0^2 where H_0 is 10^{-42} GeV in energy units. This is then interpreted as a vacuum energy density of \Lambda/8\pi G ~ (10^{-12} GeV)^4 – which is 10^{60} times smaller than the natural expectation in the Standard Model of particle physics which has worked perfectly up to ~10^3 GeV. Clearly something is *very wrong* with our understanding. We ought to stop and ask if perhaps enlarging the cosmological model (to accommodate inhomogeneities, non-‘dust’ matter etc) might mitigate this conclusion, rather than pursue ‘precision cosmology’ with a model that may be past its sell-by date. Of course the cosmological constant problem will still be with us – but at least we will not be doubling it with a ‘why now?’ problem. I appreciate many people think the concordance cosmology is ‘simple’ – but in fact there is *no physical understanding* at all of why \Lambda ~ H_0^2! Can we please look at alternatives instead of deluding ourselves that we have solved cosmology?! We are at the very start, not the end.

      Sorry for the diatribe – Subir
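
      For anyone wanting to check the orders of magnitude quoted above, a rough sketch (taking H_0 ~ 10^{-42} GeV and the reduced Planck mass M_{Pl} ~ 2 x 10^{18} GeV) runs:

      \Lambda ~ H_0^2 ~ 10^{-84} GeV^2,   \rho_\Lambda = \Lambda/8\pi G ~ \Lambda M_{Pl}^2 ~ 10^{-47} GeV^4 ~ (10^{-12} GeV)^4,

      to be compared with a naive cutoff-scale estimate of order (10^3 GeV)^4 = 10^{12} GeV^4 from the energies up to which the Standard Model has been tested – a mismatch of some 59–60 orders of magnitude.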

    • For our response to Ned’s critique of our method (which has also been alluded to by Riess & Scolnic, and amplified by Rubin & Hayden) please see: https://4gravitons.wordpress.com/2016/11/11/a-response-from-nielsen-guffanti-and-sarkar/#comments

      I have just seen the previous posts by John Peacock and can only agree that cosmology is now at the stage when unknown unknowns are what matter. I would argue further that this has been the case all along – simply because astronomical observations are not the same as controllable laboratory measurements! Both are equally necessary however to make progress in our understanding – so let us aspire to the same level of rigour and intellectual honesty in both arenas.

  5. Although you qualify your comments about using simpler statistics, there is a danger that they are read the wrong way. The more understanding one has, and uses, of the statistical model of the data, the better. As the data themselves get better, the more important it is to do the statistics right, and the supernova data are a good case in point. Imagine what we would conclude about the standard LCDM model by applying the simple chi-squared test to the WMAP microwave background correlation function:

    https://www.researchgate.net/figure/231050652_fig4_Fig-16-Angular-correlation-function-of-the-best-fit-LCDM-model-toy-finite-universe

    LCDM is ‘clearly’ a bad fit. Except that it isn’t, and ‘fancy statistics’ tells you so. Let’s hear it for fancy statistics – it simply means doing it properly, including all the understanding you have of the data and the model.

    • John Peacock Says:

      Alan:

      > As the data themselves get better the more
      > important it is to do the statistics right

      You can also argue the reverse. Beyond a certain point, random errors on a given statistic may be so small that advanced methods are pointless. You might e.g. find a situation where a naive fit estimates a given parameter with an rms error of 1%, whereas a careful use of all the information can reduce this to 0.1%. But if the model being fitted is incorrect so that the estimated parameters are biased by a factor 2, the difference is moot. So I think there are 3 stages: (1) primitive data where the error analysis can be simplistic and right, but where you’re never going to see anything; (2) the glory days, where compelling evidence can be found for new phenomena if you use every fancy tool you can lay your hands on; (3) the post-Rumsfeld regime of realising that true errors are dominated by unknown unknowns. Cosmology was in stage (2) from perhaps 1990-2010, but I fear we’re rapidly heading for (3).

    • John Peacock Says:

      Philip: As data improve from nothing, the first person to claim something ought to be the one who is doing the statistics most carefully, as they’ll be the first to the n-sigma finishing line. But sometimes this chance is missed, and the expansion of the universe is a case in point. Slipher had ample opportunity to demonstrate this with the data he had assembled by 1917, but unfortunately the analysis he actually performed wasn’t quite as powerful as it could have been: see https://arxiv.org/abs/1301.7286

    • John: The much-mocked Donald Rumsfeld made a lot of sense, and you are right that unknown unknowns may ultimately limit what one can learn. You should still do the most sophisticated analysis you can though, since by doing so you are most likely to expose deficiencies in the model, and have the best chance of getting a deeper understanding, and, who knows, turn them into known unknowns that we can deal with, or reveal some new discovery. The chance of erroneously claiming a new discovery is surely higher if you’ve not used all of your understanding of the data and the model.

  6. Toffeenose Says:

    Omega equals one because it is a vacuum solution to GR.

  7. I’m curious about the use of a profile likelihood, rather than marginalising over the other parameters.

    • Dear Alan, that is because ours is a *frequentist* procedure. We have no priors – we use Gaussians to model the distribution of the actual light curve parameters x_1 and c (provided by JLA).

      The Bayesian equivalent of our method has been presented by your IC colleagues who have *confirmed* our results (Shariff et al., “BAHAMAS: new SNIa analysis reveals inconsistencies with standard cosmology”, https://arxiv.org/abs/1510.05954).

      The claim in today’s hatchet job on our work (https://arxiv.org/abs/1610.08972) is that ours is a ‘Bayesian Hierarchical Model’. I hope Roberto Trotta is not offended because it was he and collaborators who presented this model! Rubin & Hayden claim they “demonstrate errors in (our) analysis” but in fact they recover our answer more-or-less (3.1 sigma evidence for acceleration with the JLA data). By making further corrections to this dataset they claim the significance goes up to 4.2 sigma. That would indeed be progress if true (it’d be good if they made their code public as we have). But it is still short of 5 sigma (and begs the question of why the significance has increased so little – recall that Riess et al (astro-ph/9805201) claimed 3.9 sigma with just 16+34 supernovae). Let us please be honest that previous analyses of the data were not done correctly and not try to justify the old result. Then we can all make progress in the field.

      I am happy to address any concerns that cosmologists may have. I will not however respond to ill-informed, self-promoting ramblings from those who clearly have nothing better to do!
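
      To illustrate the distinction at stake in the profile-likelihood question above, here is a toy example (nothing to do with the supernova likelihood itself): for a parameter of interest theta and a nuisance parameter nu, the profile likelihood maximises over nu at each theta, whereas the Bayesian route integrates nu out against a prior.

      # Toy example: profile vs marginal likelihood for a Gaussian model
      # with unknown mean (theta, the parameter of interest) and unknown
      # width (nu, the nuisance parameter).
      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.integrate import quad

      rng = np.random.default_rng(1)
      data = rng.normal(loc=0.5, scale=2.0, size=50)

      def log_like(theta, nu):
          return np.sum(-0.5 * ((data - theta) / nu) ** 2 - np.log(nu))

      def profile_loglike(theta):
          # frequentist: maximise over the nuisance parameter
          res = minimize_scalar(lambda nu: -log_like(theta, nu),
                                bounds=(0.1, 10.0), method='bounded')
          return -res.fun

      def marginal_loglike(theta):
          # Bayesian: integrate over the nuisance parameter (flat prior)
          val, _ = quad(lambda nu: np.exp(log_like(theta, nu)), 0.1, 10.0)
          return np.log(val)

      for theta in (0.0, 0.5, 1.0):
          print(theta, profile_loglike(theta), marginal_loglike(theta))
      # With informative data the two usually peak in much the same place,
      # but they are different constructions and need not agree in general.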

    • Dear Alan, that’s simply because our analysis is frequentist. The Bayesian equivalent (Bayesian Hierarchical Model) has been presented by your colleague Roberto Trotta (Shariff et al, “BAHAMAS: new SNIa analysis reveals inconsistencies with standard cosmology”, https://arxiv.org/abs/1510.05954). They *confirm* our results using the same JLA dataset.

      Strangely enough, Rubin & Hayden (https://arxiv.org/abs/1610.08972) credit us with Roberto’s approach, asserting that we are using the Bayesian Hierarchical Model. They too confirm our findings more-or-less (3.1 sigma for acceleration). They then manage to improve this to 4.2 sigma by making post facto corrections to the data – thus miraculously recovering the result obtained earlier by JLA with the flawed ‘constrained \chi^2’ method! Actually JLA had already made corrections for such selection effects (Malmquist bias) in the data. Even if we accept that this is not double counting, the significance is still not 5 sigma … and begs the question of why there has been so little improvement since 1998, when Riess et al (https://arxiv.org/abs/astro-ph/9805201) claimed 3.9 sigma with just 50 supernovae.

      Riess too claims (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”, referring to the JLA paper (of which he was a coauthor). But in fact we are using exactly the same dataset (http://supernovae.in2p3.fr/sdss_snls_jla/ReadMe.html) which incorporates such corrections already!

      My collaborators Jeppe & Alberto and I are happy to answer any questions from professional cosmologists such as yourself. We will not however respond to ill-informed and self-promoting ramblings by people who clearly have nothing better to do!

      Subir

      PS: Amusingly Rubin & Hayden also assert that there is “~ 75 sigma evidence for positive Omega_Lambda”. I hope you’ll agree that if *any* hypothesis can be established at such significance then that would perhaps be an even more remarkable claim!

  8. Moncy Vilavinal John Says:

    Dear All, I would like to call your attention to a related new paper that appeared today in the arXiv https://arxiv.org/abs/1610.09885

  9. I finally got around to blogging about this today

    “Has dark energy had its day?”

    https://thecuriousastronomer.wordpress.com/2016/11/03/has-dark-energy-had-its-day/

  10. Moncy Vilavinal John Says:

    A common misconception is that `no acceleration’ means a Milne-type empty universe. This is not correct. A nonempty flat universe, with both matter (66.6%) and dark energy (33.3%), can also expand with no acceleration. See a recent paper:
    [1610.09885] Realistic coasting cosmology from the Milne model.
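
    For a flat model, zero acceleration at the present epoch indeed requires

    q_0 = \Omega_m/2 - \Omega_\Lambda = 0  together with  \Omega_m + \Omega_\Lambda = 1,  i.e.  \Omega_m = 2/3, \Omega_\Lambda = 1/3,

    which is the 66.6%/33.3% split quoted above (although, as noted in the replies below, the acceleration vanishes only at that particular epoch).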

    Use of Bayesian theory in comparing cosmological models is now widely discussed. It may be interesting to know that such work was done as far back as 2001 and published in a Physical Review D paper. The result was the same as the recent claim: that the supernova data do not provide strong evidence in favour of an accelerating universe when compared, using Bayesian theory, to a `non-accelerating coasting model’.
    http://journals.aps.org/prd/abstract/10.1103/PhysRevD.65.043506

    This result was confirmed again in The Astrophysical Journal (ApJ) in 2005.
    http://iopscience.iop.org/article/10.1086/432111/meta

    The abstract of this paper tells it all:

    In this paper, using a significantly improved version of the model-independent, cosmographic approach to cosmology, we address an important question: was there a decelerating past for the universe? To answer this, Bayes’s probability theory is employed, which is the most appropriate tool for quantifying our knowledge when it changes through the acquisition of new data. The cosmographic approach helps to sort out the models in which the universe was always accelerating from those in which it decelerated for at least some time in the period of interest. The Bayesian model comparison technique is used to discriminate these rival hypotheses with the aid of recent releases of supernova data. We also attempt to provide and improve another example of Bayesian model comparison, performed between some Friedmann models, using the same data. Our conclusion, which is consistent with other approaches, is that the apparent magnitude-redshift data alone cannot discriminate these competing hypotheses. We also argue that the lessons learned using Bayesian theory are extremely valuable to avoid frequent U-turns in cosmology.

    • telescoper Says:

      This model (the “coasting model”) has been discussed for over 50 years, not just since 2001. The main problem with it is that it just doesn’t fit the totality of the data.

      It’s also the case that the coasting phase of the model is transient. We have to live at the precise point when the acceleration is zero.

      • Moncy Vilavinal John Says:

        Thank you for the comment. About the Milne model, you are correct – it has been discussed for more than half a century. I was pointing out the misconception that a constant expansion rate is possible only in a Milne model.
        About the second part of the comment – I wonder why you believe that we must have such a synchronicity problem? Avelino and Kirshner just say that there is a synchronicity problem in the Lambda-CDM model.

      • Moncy Vilavinal John Says:

        But I hope you will agree that it is by trying to solve such coincidence problems, rather than simply living with them, that science has grown.
