## A Non-accelerating Universe?

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar, which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper, but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions, so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis, but I’ve skimmed over what they’ve done and, as far as I can tell, it looks like a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version), in terms of the dark energy (y-axis) and matter (x-axis) density parameters:

Models lying on the line shown in this plane have exactly the right balance between Ωm and ΩΛ for the decelerating effect of the former to cancel the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours show the “1, 2 and 3σ” regions, regarding all other parameters as nuisance parameters. It is true that the line of no acceleration does pass inside the 3σ contour, so in that sense it is not entirely inconsistent with the data. On the other hand, the best fit (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
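For concreteness, the dividing line can be read off from the present-day deceleration parameter, q0 = Ωm/2 − ΩΛ. A minimal sketch, using the parameter values quoted above:

```python
# Sketch (not the paper's code): for a matter + Lambda universe the
# present-day deceleration parameter is q0 = Omega_m/2 - Omega_Lambda,
# so the "no acceleration" line in the (Omega_m, Omega_Lambda) plane is q0 = 0.

def q0(omega_m, omega_lambda):
    """Present-day deceleration parameter for matter + Lambda."""
    return omega_m / 2.0 - omega_lambda

# Best fit quoted in the post: negative q0, i.e. accelerating.
print(q0(0.341, 0.569))

# Milne model (empty universe, the origin of the plot): coasting, on the q0 = 0 line.
print(q0(0.0, 0.0))

# Einstein-de Sitter (Omega_m = 1, no Lambda): positive q0, decelerating.
print(q0(1.0, 0.0))
```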

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is treated as a “nuisance parameter” in the above diagram.
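To put rough numbers on the curvature argument, here is an illustrative sketch; the fiducial H0 = 70 km/s/Mpc is an assumption for illustration, not a value from the paper:

```python
import math

# Illustrative sketch: the closure relation fixes the curvature term, and the
# Milne model (Omega_m = Omega_Lambda = 0) implies Omega_k = 1, i.e. a comoving
# curvature radius as small as the Hubble radius. H0 = 70 km/s/Mpc is assumed.

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed fiducial value)

def omega_k(omega_m, omega_lambda):
    """Curvature density parameter from the sum rule O_m + O_k + O_L = 1."""
    return 1.0 - omega_m - omega_lambda

def curvature_radius_mpc(omega_m, omega_lambda):
    """Comoving radius of curvature (c/H0)/sqrt(|O_k|) in Mpc (inf if flat)."""
    ok = omega_k(omega_m, omega_lambda)
    hubble_radius = C_KM_S / H0  # ~4283 Mpc for the assumed H0
    return math.inf if ok == 0 else hubble_radius / math.sqrt(abs(ok))

# Milne: Omega_k = 1, curvature radius equal to the Hubble radius.
print(omega_k(0.0, 0.0), round(curvature_radius_mpc(0.0, 0.0)))

# A nearly flat concordance-like model: curvature radius far beyond the horizon.
print(round(curvature_radius_mpc(0.30, 0.695)))
```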

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is, on large scales, homogeneous and isotropic, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!

### 55 Responses to “A Non-accelerating Universe?”

1. Brian Schmidt Says:

Clearly I have a bias in this – I do agree CMB, BAOs, and SN all contribute to the picture, and only using SN Ia to make a point is not compelling at this point, independent of what you think of SN Ia.

I think there is at least one issue with the analysis (see https://davidarnoldrubin.com/2016/10/24/marginal-marginal-evidence-for-cosmic-acceleration-from-type-ia-supernovae/) ,

also note that the SN do not absolutely require GR to show acceleration, rather only isotropy and homogeneity, if one is prepared to expand the RW metric as a Taylor series and simply use distance vs redshift – no model… (e.g. http://adsabs.harvard.edu/abs/2008ApJ…677….1D)

• telescoper Says:

Yes, quite right. Any metric theory will work. But there needs to be a model for the parameter constraints to make sense.

• ADS URLs will not work properly out of the box in WordPress. However, even cutting and pasting the URL from “http” through “1D” doesn’t work either. 😦

• Almost. Try this.

• And thereby hangs a tale.* At a symposium in conjunction with the IAU General Assembly in Manchester in 2000, Ruth Daly (first author of the paper Brian pointed us to above) showed some constraints in the lambda-Omega plane which favoured the concordance model but didn’t actually completely rule out the Einstein-de Sitter model. Ever the devil’s advocate, Jim Peebles asked if there was enough room for him to live in that corner of parameter space. Almost immediately, Bob Kirshner cried out from the audience “But you would be alone!”

——–
*I almost wrote “And thereby hangs a tail”, but I have an excuse. When I think of Bob Kirshner, I often think of Fess Parker playing Daniel Boone in the eponymous 1960s television series (and am the only person in the observable universe who does so, at least that has been the case until now). This is because Kirshner presumably comes from the German “Kürschner”, which means “furrier” (as in the profession involving making clothing from animals, not as in “more furry”!) and I always picture Daniel Boone (distant relative of 1950s crooner Pat Boone) wearing his coonskin cap. 🙂 The French word is “pelletier”, and there is a Dutch astronomer named Peletier.

2. “I think the above link should take you an “open access” version of the paper”

Indeed it does. I’m showing my age by assuming that Nature implies “not publicly available”. 😐

3. “Models shown in this plane by a line have the correct balance between Ωm, and ΩΛ to cancel out the decelerating effect of the former against the accelerating effect of the latter”

Old astronomers will recognize this line as q=0, where q=Ωm/2-ΩΛ.

Since this paper has a statistical bent, remember that old statisticians never die—they are just broken down by age and sex. 🙂

4. “I am not all that surprised by this result, actually. I’ve always felt that taken on its own the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. “

Very true, and very good, and ignored by the “time to rewrite the cosmology books” pundits, but overkill, really. All one needs is the value of Omega from Coles and Ellis (not much has changed since then) and the supernova stuff to conclude that the universe is accelerating (and the somewhat weaker claim that the cosmological constant is positive). Additionally, assuming GR etc, the universe will expand forever, whatever the value of the curvature parameter turns out to be.

5. “Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration.”

True. No conceivable astronomical measurement does, including redshift drift. One can at best argue that they measure it more directly than, say, assuming flatness and the immortal truth of Coles and Ellis (which I think is what Bob Kirshner means when he says that the supernovae actually measure the acceleration). One can argue about whether redshift drift is more direct, but it is still not “a direct measurement of acceleration” in any meaningful sense. (It would pretty conclusively rule out almost all other explanations for cosmic redshift, although few take these seriously (and I see no reason why they should)).

6. While good statistics are essential when debating details, simpler (but still correct) statistics and/or justified assumptions are sometimes good enough when the point is whether the result is basically correct. Once convinced of that, simpler statistics and/or additional justified assumptions can be useful in exploring an even larger parameter space.

Bottom line: While one can (and should, in the interest of healthy scepticism) question the supernova results, it is important to realize that they are robust and the main conclusions don’t depend on fancy statistics. (In this connection, see the two MNRAS papers I shamelessly promoted in comments on the previous post. If I got something wrong, criticize it in a paper, preferably in MNRAS. If not, join a still very exclusive club by citing them! 🙂 )

• Although you qualify your comments about using simpler statistics, there is a danger that they are read the wrong way. The more understanding one has, and uses, of the statistical model of the data, the better. As the data themselves get better, the more important it is to do the statistics right, and the supernova data are a good case in point. Imagine what we would conclude about the standard LCDM model by applying the simple chi-squared test to the WMAP microwave background correlation function:

https://www.researchgate.net/figure/231050652_fig4_Fig-16-Angular-correlation-function-of-the-best-fit-LCDM-model-toy-finite-universe

LCDM is ‘clearly’ a bad fit. Except that it isn’t, and ‘fancy statistics’ tells you so. Let’s hear it for fancy statistics – it simply means doing it properly, including all the understanding you have of the data and the model.

• I agree, of course. As Peter and I like to quote, in the words of George McVittie, “the essence of cosmology is statistics”. I certainly hope that no-one is led astray by misinterpreting a blog comment from me, but I think that the probability of that is pretty low. 😐

My point is that everyone makes assumptions. Sometimes they are necessary, if only because including everything would be too much work. (Of course, this changes with time, and cosmology and many other things have benefited from increased computer power.) For example, the paper we are discussing assumes that distances can be calculated from redshift as if the universe were completely homogeneous, even down to the scale of a supernova beam. This is not obvious. I think it is the case, because I did look at it in some detail, but as it stands in the paper it is an assumption (and not even mentioned as such). (Various authors have pointed out that it is probably a good assumption, but it does introduce some additional scatter at some level which probably can’t be accounted for in a simple way.)

Of course, it is also an assumption that GR is correct and so on, but we have no evidence that GR is not correct on large scales, while there is much evidence that the universe is not completely homogeneous at solar-system-size scales.

• John Peacock Says:

Alan:

> As the data themselves get better the more
> important it is to do the statistics right

You can also argue the reverse. Beyond a certain point, random errors on a given statistic may be so small that advanced methods are pointless. You might e.g. find a situation where a naive fit estimates a given parameter with an rms error of 1%, whereas a careful use of all the information can reduce this to 0.1%. But if the model being fitted is incorrect so that the estimated parameters are biased by a factor 2, the difference is moot. So I think there are 3 stages: (1) primitive data where the error analysis can be simplistic and right, but where you’re never going to see anything; (2) the glory days, where compelling evidence can be found for new phenomena if you use every fancy tool you can lay your hands on; (3) the post-Rumsfeld regime of realising that true errors are dominated by unknown unknowns. Cosmology was in stage (2) from perhaps 1990-2010, but I fear we’re rapidly heading for (3).

• “So I think there are 3 stages:”

Isn’t there a stage between 1 and 2, where a simple analysis actually shows really interesting stuff? Discovering the expansion of the universe, say?

• John Peacock Says:

Philip: As data improve from nothing, the first person to claim something ought to be the one who is doing the statistics most carefully, as they’ll be the first to the n-sigma finishing line. But sometimes this chance is missed, and the expansion of the universe is a case in point. Slipher had ample opportunity to demonstrate this with the data he had assembled by 1917, but unfortunately the analysis he actually performed wasn’t quite as powerful as it could have been: see https://arxiv.org/abs/1301.7286

• John: The much-mocked Donald Rumsfeld made a lot of sense, and you are right that unknown unknowns may ultimately limit what one can learn. You should still do the most sophisticated analysis you can though, since by doing so you are most likely to expose deficiencies in the model, and have the best chance of getting a deeper understanding, and, who knows, turn them into known unknowns that we can deal with, or reveal some new discovery. The chance of erroneously claiming a new discovery is surely higher if you’ve not used all of your understanding of the data and the model.

• “But sometimes this chance is missed, and the expansion of the universe is a case in point.”

Interesting paper from an interesting proceedings volume (one of the few I have read cover to cover—highly recommended!). My point was that Hubble’s claim of the expansion of the universe was not based on fancy statistics. (Some would claim that they weren’t fancy enough and his claim, though it turned out to be correct, was on shaky foundations. (Actually, for different reasons, as outlined by Sandage in his contribution to a 1993 Saas-Fee course, Hubble doubted the reality of expansion, but not the correlation between apparent magnitude and redshift.))

Of course, Hubble wasn’t actually the first to make the claim, but he publicized his own work (often glossing over the contributions of others).

7. Just to be pedantic: this paper is not published in the journal Nature, but rather in the journal Scientific Reports, which is published by the Nature publishing group. I have to admit I hadn’t heard of this journal before this article…

• telescoper Says:

Quite so, I probably should have said “by Nature” rather than “in Nature”…Anyway, I’ve now changed it.

8. I feel the statistical analysis in this paper is based on a false assumption about the data. If their assumptions were true, the observed decay rate variable x1 would be independent of the redshift in the observed sample. But this is not the case: there is a correlation between decay rate and redshift, significant at 4 sigma, such that the high-redshift fraction has more slowly decaying and hence higher-luminosity supernovae. Normally the decay rate vs luminosity calibration would account for this, but the Nielsen et al. procedure reduces the effectiveness of the calibration. This allows the Malmquist bias to enter their fit, leading to reduced evidence for acceleration, since the sign of acceleration is fainter SNe at high redshift.
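The kind of consistency check described here can be sketched as a simple correlation test on (z, x1) pairs. The data below are synthetic (an injected trend plus scatter, with made-up numbers), purely to illustrate the procedure; they are not the actual JLA values:

```python
import numpy as np

# Toy sketch of the check: under the independence assumption the stretch
# parameter x1 should show no correlation with redshift. Synthetic data only.

rng = np.random.default_rng(42)
n = 740                                   # roughly the size of the JLA sample
z = rng.uniform(0.01, 1.0, n)
x1 = 0.8 * z + rng.normal(0.0, 1.0, n)    # injected x1-z trend plus unit scatter

def pearson_r(a, b):
    """Pearson correlation coefficient of two samples."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

r = pearson_r(z, x1)
# Rough significance via the Fisher z-transform: sigma ~ atanh(r) * sqrt(n - 3).
sigma = float(np.arctanh(r) * np.sqrt(n - 3))
print(round(r, 3), round(sigma, 1))
```

With an injected trend of this size the test flags the correlation at well over 3 sigma, which is the point of the check: a significant x1–z correlation undermines the independence assumption.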

• Dear Peter, many thanks for discussing our paper. Our calculation is easily checked using the python script linked to our arXiv posting (http://dx.doi.org/10.5281/zenodo.34487). It also provides the full dataset used, kindly made public by Betoule et al. who have done the most comprehensive ‘JLA’ analysis to date including HST, SDSS II, SNLS and nearby SNe Ia (https://arxiv.org/abs/1401.4064).

We use *exactly* the same dataset – but unlike JLA (and previous authors) we do not adjust an arbitrary error added to each data point in order to get a ‘constrained \chi^2’ per degree of freedom of 1 for the *fit to LCDM*. This may be OK for parameter estimation when one is certain the model is right – it certainly is not correct for model selection! As Alan Heavens emphasises above it is important to get the statistics right and I do not need to tell you why the Maximum Likelihood Estimator is a better approach. We show the consequence of this in the usual \Omega_\Lambda-\Omega_m plane – the best-fit moves to higher values of \Omega_m and the 3 \sigma contour now overlaps with the line separating acceleration from deceleration. You say that does not surprise you … but the 2011 Nobel Prize was awarded “… for the discovery of the accelerating expansion of the universe through observations of distant supernovae”. All we are saying is that this is *not true* even with the latest much expanded dataset, if we accept the 5 \sigma standard for a discovery of fundamental importance (like the Higgs boson or gravitational waves).

To make our calculation analytic we use a Gaussian model for the spread of the (SALT 2) light-curve template parameters x_1 (stretch) and c (colour). As seen in our Fig.1 this is not a bad first approximation. Of course there may well be Malmquist bias and other selection effects in the data. JLA did attempt to correct for this in their calibration of the decay rate versus luminosity and we used their ‘bias corrected’ m_B values. So Ned Wright is being disingenuous when he asserts that our analysis is “based on a false assumption about the data” because we have not taken into account the redshift dependence of the x_1 and c distributions. In fact we have done *no more and no less* than JLA and anyone else earlier (UNION/2/2.1, PanSTARRS etc).

Brian Schmidt above cites a blog post by David Rubin who is involved in the recently proposed ‘UNITY’ approach (https://arxiv.org/abs/1507.01602) to account for such effects. This is most welcome – however Rubin et al have *not* shown that doing so undermines our conclusion. In fact they did not mention our paper (which had appeared earlier). It would indeed be remarkable if making such post facto corrections yielded the old result obtained using an incorrect statistic – unless of course one aims to somehow get the “correct answer” (to quote one of our adversarial referees)! Yet on his blog Rubin claims: “using the basic model of Nielsen et al., but modeling each category of SN discovery (nearby, SDSS, SNLS, HST) with its own population mean … The chi^2 between, e.g., Milne and LambdaCDM increases from 11.61 to 16.84”. If he is really using our code (rather than the flawed ‘constrained \chi^2’) he ought to check with us that he is doing it right and/or publish a paper with details so others can judge … rather than make unsubstantiated negative remarks about our work on social media. It is called (scientific) etiquette.

Well I would not normally try to defend a serious scientific result on social media either – however since professional cosmologists are weighing in here I hope you will grant us the right of reply (everything I have said above is on behalf of my coauthors Jeppe Nielsen and Alberto Guffanti as well). Allow me to also make a remark in response to other comments on your post. It astonishes me that so many cosmologists do not realise how much of what they believe to be true depends on the *assumed* model. Only if we assume perfect homogeneity and pressureless ‘dust’ (as was done in the 1930s … before any data!) do we get the simple sum rule: 1 = \Omega_m + \Omega_k + \Omega_\Lambda. The claim that ‘standard rulers’ like the CMB acoustic peak and BAO uniquely fix these parameters by providing complementary constraints to supernovae is based on this sum rule being exactly true. And one then infers that \Lambda is of order H_0^2 where H_0 is 10^{-42} GeV in energy units. This is then interpreted as a vacuum energy density of \Lambda/8\pi G ~ (10^{-12} GeV)^4 – which is 10^{60} times smaller than the natural expectation in the Standard Model of particle physics which has worked perfectly up to ~10^3 GeV. Clearly something is *very wrong* with our understanding. We ought to stop and ask if perhaps enlarging the cosmological model (to accommodate inhomogeneities, non-‘dust’ matter etc) might mitigate this conclusion, rather than pursue ‘precision cosmology’ with a model that may be past its sell-by date. Of course the cosmological constant problem will still be with us – but at least we will not be doubling it with a ‘why now?’ problem. I appreciate many people think the concordance cosmology is ‘simple’ – but in fact there is *no physical understanding* at all of why \Lambda ~ H_0^2! Can we please look at alternatives instead of deluding ourselves that we have solved cosmology?! We are at the very start, not the end.

Sorry for the diatribe – Subir

• “All we are saying is that is *not true* even with the latest much expanded dataset, if we accept the 5 \sigma standard for a discovery of fundamental importance (like the Higgs boson or gravitational waves).”

As I mentioned above with my rather cheesy example, it is really strange to say that acceleration has not been detected even though the overwhelming portion of the likelihood contour—despite the small corner of parameter space which remains for positive q—has negative q. This is really a different situation from detecting particles or gravitational waves, where the null hypothesis is clear: there is nothing there. Treating a non-accelerating universe in the same way is an assumption, and a bias—against the cosmological constant; it’s obvious from your remarks that you “don’t like it”. Yes, one needs to be careful about “trying too hard to get the right answer”, but you also need to realize that your right answer might be different from someone else’s right answer. 🙂 Of course, 5 sigma is completely arbitrary. Would you concede acceleration if non-acceleration could be ruled out at 6 sigma? 5.1 sigma? 5.0000000000000000001 sigma?
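For orientation, here is the standard back-of-envelope mapping from a chi-squared (log-likelihood-ratio) difference to a Gaussian-equivalent “sigma”, assuming one extra parameter and Wilks’ theorem. This is my illustration only, using the Δχ² values quoted in the thread; the significances actually quoted in the papers come from their full analyses and differ somewhat:

```python
import math

# Rule-of-thumb sketch (not from either paper): by Wilks' theorem, a
# log-likelihood-ratio chi^2 with one degree of freedom corresponds to a
# Gaussian-equivalent significance of roughly sqrt(delta_chi2) sigma.

def sigma_equiv(delta_chi2):
    """Gaussian-equivalent sigma for a 1-d.o.f. chi-squared difference."""
    return math.sqrt(delta_chi2)

for dchi2 in (11.61, 16.84, 25.0):
    print(dchi2, round(sigma_equiv(dchi2), 2))
# sqrt(11.61) ~ 3.4 sigma, sqrt(16.84) ~ 4.1 sigma; 25 corresponds to the 5-sigma bar.
```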

• “Only if we assume perfect homogeneity and pressureless ‘dust’ (as was done in the 1930s … before any data!) do we get the simple sum rule: 1 = \Omega_m + \Omega_k + \Omega_\Lambda. The claim that ‘standard rulers’ like the CMB acoustic peak and BAO uniquely fix these parameters by providing complementary constraints to supernovae is based on this sum rule being exactly true.”

True, but we know that the universe is close to being homogeneous in some sense. You need to show that the conclusions mentioned do not follow from such relaxed assumptions as well. Everyone knows that the universe is not completely homogeneous. This doesn’t mean that no conclusions of observational cosmology are true. It doesn’t even mean that they need to be substantially modified.

• “And one then infers that \Lambda is of order H_0^2 where H_0 is 10^{-42} GeV in energy units. This is then interpreted as a vacuum energy density of \Lambda/8\pi G ~ (10^{-12} GeV)^4 – which is 10^{60} times smaller than the natural expectation in the Standard Model of particle physics which has worked perfectly up to ~10^3 GeV. Clearly something is *very wrong* with our understanding.”

Many assumptions here. Do you think it is strange that lambda is of the same order as the Hubble constant? If so, read Lake’s paper I linked to above. The assumption that the cosmological constant is “vacuum energy” is a huge assumption. Who says that it is? Again, I can’t say it often enough, please check out Bianchi and Rovelli.

Many people, most of them much smarter than I am, have worked on large-scale inhomogeneities, back-reaction and so on. Good. Maybe they’ll be able to explain the data in a non-contrived way. Are you worried about coincidences? Wouldn’t it be a huge coincidence if such large-scale inhomogeneity just happened to look the same as a three-parameter model (none of which was designed to fit anything) in 1920s cosmology? (Similarly, even if CDM can explain all the MOND phenomenology, it is strange that CDM “just happens”, after complex “baryonic physics”, gastrophysics, and so on, to come out looking like MOND. It doesn’t, though, at least not yet. From time to time people publish claims like this (and here confirmation and publication bias have to be factored in); sometimes there are rebuttals involving basic stuff, and the rebuttal isn’t even acknowledged. That’s not good.)

• An HTML typo made the text linking to Bianchi and Rovelli much larger, but maybe that is a good thing!

• “This is then interpreted as a vacuum energy density of \Lambda/8\pi G ~ (10^{-12} GeV)^4 – which is 10^{60} times smaller than the natural expectation in the Standard Model of particle physics which has worked perfectly up to ~10^3 GeV. Clearly something is *very wrong* with our understanding.”

Hmmm… Yes, something is wrong. What could it be? Let’s see, particle physics comes up with something which is wrong by several dozen orders of magnitude—and the blame goes to astronomers? What sort of logic is that? There is obviously something wrong with the estimate. The claim is that some unknown symmetry principle (invoked like a rabbit out of a hat, but of course we don’t like dark matter and dark energy because we don’t know what they are; double standard here?) or cancellation mechanism can make the contribution exactly zero. I’ll believe it when I see it. This hand-waving has been going on for decades, and still no-one has such a cancellation mechanism. So we are supposed to believe that the same people who are off by dozens of orders of magnitude are correct when they tout an unknown cancellation mechanism, and after decades still haven’t come up with the goods?

Steven Weinberg (arguably the greatest living physicist) came up with an explanation for the observed value of the cosmological constant which still allows for vacuum energy, and invoked the multiverse even before Tegmark was driving a Saab 900. Why is this even considered to still be a problem?

• Let me quote something which one astronomer actually said about another: “XXX is a wonderful person, but he uses frames in his web pages”. So, to see Max in turbo mode, search for “turbo” on the photos page at the link above.

• By the way, this wasn’t said about Max. I won’t say who. 🙂

• For our response to Ned’s critique of our method (which has also been alluded to by Riess & Scolnic, and amplified on by Rubin & Hayden) please see: https://4gravitons.wordpress.com/2016/11/11/a-response-from-nielsen-guffanti-and-sarkar/#comments

I have just seen the previous posts by John Peacock and can only agree that cosmology is now at the stage when unknown unknowns are what matter. I would argue further that this has been the case all along – simply because astronomical observations are not the same as controllable laboratory measurements! Both are equally necessary however to make progress in our understanding – so let us aspire to the same level of rigour and intellectual honesty in both arenas.

9. I think that there is a more basic problem in the paper and/or in the hype, namely, what is the null hypothesis? When the LHC has perhaps detected a particle, the null hypothesis is clear: there is no particle. Thus, it seems a good idea to demand a 3-sigma, or even 5-sigma, detection. On the other hand, suppose I give 1000 people a piece of the same cheese and 997 say that it is gouda and 3 say that it is cheddar. Would it be correct to say that I have only marginal evidence that it is gouda?

The most objective hypothesis is that the cosmological parameters are what we measure them to be. If one wants to impose priors, one has to keep in mind a) that different parameterizations can lead to different results and b) that in general the cosmological parameters change with time. Take the example of a flat universe: Omega+lambda=1. On the face of it, this seems like a rather unlikely coincidence. But one can also think about it in terms of the (comoving) radius of curvature of the universe: pick a random number between 0 and infinity and it is likely to be large.

A way to make this more objective is to parameterize various cosmological models by invariant quantities. If one thinks of the universe as a dynamical system, with the expansion of the universe corresponding to trajectories in the lambda-Omega diagram, then it turns out that there is a combination of lambda and Omega which is a constant of motion. This approach is explained in an interesting paper by Kayll Lake (which also explains the classical flatness problem for a large portion of the parameter space, which our universe belongs to).
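The trajectories themselves follow directly from the Friedmann equation. A minimal illustration (the present-day values are assumed for the example, and this sketches only the flow in the plane, not Lake’s invariant itself):

```python
# Sketch of the dynamical-system picture: for matter + curvature + Lambda,
# the instantaneous density parameters at scale factor a follow from the
# Friedmann equation, and each model traces a trajectory in the
# (Omega_m, Omega_Lambda) plane. Present-day values below are illustrative.

def omegas_at(a, om0, ol0):
    """Instantaneous (Omega_m, Omega_Lambda, Omega_k) at scale factor a."""
    ok0 = 1.0 - om0 - ol0
    e2 = om0 * a**-3 + ok0 * a**-2 + ol0      # (H/H0)^2
    return om0 * a**-3 / e2, ol0 / e2, ok0 * a**-2 / e2

# A concordance-like model today (the sum rule holds at every epoch) ...
print(omegas_at(1.0, 0.3, 0.7))

# ... was matter-dominated in the past (trajectory starts near (1, 0)) ...
print(omegas_at(0.01, 0.3, 0.7)[:2])

# ... and heads towards the de Sitter point (0, 1) in the future.
print(omegas_at(100.0, 0.3, 0.7)[:2])
```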

• To be clear, Lake’s approach perhaps cannot explain why the sum lambda+Omega is 1 to a very high accuracy (if, indeed, it is; this should be measured even if people believe it because of inflation), but can definitely explain the flatness problem in its original formulation, namely the question why Omega is of order 1 (which is a different statement but widely believed to be a puzzle in classical cosmology). I have been arguing for a long time that the classical flatness problem is essentially the result of arguing too far from analogy. Lest one think that this couldn’t happen, something similar involves Hawking radiation. Many people argue from the simple picture of a particle-antiparticle pair, one of which falls to the singularity and one of which escapes, an analogy propagated by Hawking himself, but it is wrong. If one takes it seriously, one is led to wrong conclusions. Orders-of-magnitude wrong. This has happened to respectable people.

Back when QSOs were thought to be more numerous around a redshift of 2 or whatever, some people proposed a cosmological model similar to the current standard model, but with a long quasi-static phase. Dicke pointed out, correctly, that this would require fine-tuning of lambda and Omega. The irony is that it is precisely the same fine tuning which is required to have Omega significantly different from 1 (at least if the cosmological constant is positive, as in our universe), but later the formulation of the flatness problem by Dicke and Peebles argued the opposite, namely that fine-tuning is required to have Omega of order 1. As Lake points out, this is true if the cosmological constant is zero (but as I show in the MNRAS paper linked to above, this is qualitatively true but not quantitatively, in that a typical observer would not observe huge values of Omega), but is not true in general. So, interestingly, in general fine-tuning is required to have a significantly non-flat universe, not vice versa.

• Phillip Helbig Says:

I’ve somehow lost the reference to Dicke and the coasting universe. Does anyone have an idea where I can find it? I’m sure that I read it online somewhere, IIRC in a popular or semi-popular article (or perhaps a book), as there were pictures of scientists in it.

10. Toffeenose Says:

Omega equals one because it is a vacuum solution to GR.

• Do I detect a strong positive correlation between the seriousness of blog comments and the names under which they are posted?

11. Suppose this re-analysis had found some slightly shifted contours, but otherwise no major differences. In particular, non-acceleration ruled out at more than 5 sigma. Would the authors have written the paper? If so, would it have been accepted? If so, would we be discussing it? This is another form of bias.

12. I’m curious about the use of a profile likelihood, rather than marginalising over the other parameters.

• So for each point in the lambda-Omega plane, they take the point in the higher-dimensional space with the maximum likelihood, rather than averaging (marginalizing) as most people do. Another approach is to fix the nuisance parameters at some fiducial values. Another is to construct higher-dimensional contours and project them onto the two-dimensional space. (Charley Lineweaver used to do this with the CMB.) Between fixing the parameters and marginalizing over them, one can choose a fiducial value but, rather than a delta function, use some representation of the uncertainty, Gaussian or otherwise.

Although other parameters were involved, none of them nuisance parameters, I did investigate how two-dimensional contours depend on which procedure one adopts. For the case I looked at (supernova cosmology, investigating the influence of an additional inhomogeneity parameter), it doesn’t make a huge difference. Maximization and marginalization were indistinguishable even at a long glance in some of the plots.

It’s not that hard to carry out all of these approaches. If there is a significant difference, one can show more than one plot. If not, one can at least mention that there is no significant difference.

It would be really interesting, of course, if marginalization placed the 3-sigma contour completely within the accelerating part of the diagram.
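The difference between the two procedures can be sketched on a toy problem. This is a minimal illustration, not the authors’ actual pipeline: the parameter names `theta` and `nu`, the Gaussian likelihood, and all the numbers are invented for the sketch.

```python
import numpy as np

# Toy illustration: a bivariate Gaussian likelihood in an "interesting"
# parameter theta and a nuisance parameter nu, correlated with coefficient rho.
rho = 0.6

def log_like(theta, nu):
    # log of a bivariate Gaussian with unit variances and zero means
    return -0.5 * (theta**2 - 2.0 * rho * theta * nu + nu**2) / (1.0 - rho**2)

theta_grid = np.linspace(-4.0, 4.0, 201)
nu_grid = np.linspace(-6.0, 6.0, 1201)
T, N = np.meshgrid(theta_grid, nu_grid, indexing="ij")
L = np.exp(log_like(T, N))

# Profile likelihood: maximise over the nuisance parameter at each theta.
profile = L.max(axis=1)

# Marginal likelihood: integrate over the nuisance parameter (flat prior);
# a simple Riemann sum suffices on this uniform grid.
marginal = L.sum(axis=1) * (nu_grid[1] - nu_grid[0])

# Normalise both to peak at 1 so their shapes can be compared directly.
profile /= profile.max()
marginal /= marginal.max()

# For a Gaussian likelihood both curves have the same shape, exp(-theta^2/2),
# so any difference between them is at the level of grid discretisation.
print(np.max(np.abs(profile - marginal)))
```

For a Gaussian likelihood the two curves coincide, which is consistent with the observation above that the choice made little difference in practice; for strongly non-Gaussian likelihoods the profile and the marginal can differ appreciably.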

• I couldn’t put all the plots in the MNRAS paper, but they are all in a talk I’ve given in various versions at various places (including Sussex when Peter couldn’t attend due to Head-of-School business) over the past couple of years. (By chance, just yesterday I got the proceedings from the Moriond cosmology meeting from last March, where a very short version appears. Surprisingly, two of the plots there are rotated by 90 degrees!)

• Dear Alan, that is because ours is a *frequentist* procedure. We have no priors – we use Gaussians to model the distribution of the actual light curve parameters x_1 and c (provided by JLA).

The Bayesian equivalent of our method has been presented by your IC colleagues who have *confirmed* our results (Shariff et al., “BAHAMAS: new SNIa analysis reveals inconsistencies with standard cosmology”, https://arxiv.org/abs/1510.05954).

The claim in today’s hatchet job on our work (https://arxiv.org/abs/1610.08972) is that ours is a ‘Bayesian Hierarchical Model’. I hope Roberto Trotta is not offended, because it was he and his collaborators who presented this model! Rubin & Hayden claim they “demonstrate errors in (our) analysis” but in fact they recover our answer more or less (3.1 sigma evidence for acceleration with the JLA data). By making further corrections to this dataset they claim the significance goes up to 4.2 sigma. That would indeed be progress if true (and it would be good if they made their code public, as we have). But it is still short of 5 sigma, and it raises the question of why the significance has increased so little: recall that Riess et al (astro-ph/9805201) claimed 3.9 sigma with just 16+34 supernovae. Let us please be honest that previous analyses of the data were not done correctly, and not try to justify the old result. Then we can all make progress in the field.

I am happy to address any concerns that cosmologists may have. I will not however respond to ill-informed, self-promoting ramblings from those who clearly have nothing better to do!

• Dear Alan, that’s simply because our analysis is frequentist. The Bayesian equivalent (Bayesian Hierarchical Model) has been presented by your colleague Roberto Trotta (Shariff et al, “BAHAMAS: new SNIa analysis reveals inconsistencies with standard cosmology”, https://arxiv.org/abs/1510.05954). They *confirm* our results using the same JLA dataset.

Strangely enough, Rubin & Hayden (https://arxiv.org/abs/1610.08972) credit us with Roberto’s approach, asserting that we are using the Bayesian Hierarchical Model. They too confirm our findings more or less (3.1 sigma for acceleration). They then manage to improve this to 4.2 sigma by making post facto corrections to the data, thus miraculously recovering the result obtained earlier by JLA with the flawed ‘constrained \chi^2’ method! Actually JLA had already made corrections for such selection effects (Malmquist bias) in the data. Even if we accept that this is not double counting, the significance is still not 5 sigma … which raises the question of why there has been so little improvement since 1998, when Riess et al (https://arxiv.org/abs/astro-ph/9805201) claimed 3.9 sigma with just 50 supernovae.

Riess too claims (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”, referring to the JLA paper (of which he was a coauthor). But in fact we are using exactly the same dataset (http://supernovae.in2p3.fr/sdss_snls_jla/ReadMe.html), which incorporates such corrections already!

My collaborators Jeppe & Alberto and I are happy to answer any questions from professional cosmologists such as yourself. We will not however respond to ill-informed and self-promoting ramblings by people who clearly have nothing better to do!

Subir

PS: Amusingly, Rubin & Hayden also assert that there is “~ 75 sigma evidence for positive Omega_Lambda”. I hope you’ll agree that if *any* hypothesis could be established at such significance, that would be perhaps an even more remarkable claim!

• Maybe I am (one of) the self-promoting rambler(s), but in case anyone else is interested: the 75-sigma claim is not for the supernova data alone, but for those data combined with CMB, BAO, and so on. One needs to make the distinction clear. Do the supernova data alone say that the universe is accelerating? (Yes or no, depending on whose analysis one believes and what arbitrary threshold is employed, though I think that the “meta-question” of the correct null hypothesis is not clear to everyone.) Does a combination of several cosmological tests say that the universe is accelerating? (Yes, and probably even the CMB alone can say this.) Of course, all answers depend on higher-level assumptions: that GR is correct, that we are not in some huge void which just happens to mimic 1920s cosmology, that the Devil is not trying to fool us, and so on.

Another approach is not to discard priors. If a case can be made for a flat universe, then the signal for acceleration increases. Peter has frequently 🙂 pointed out the advantages of Bayesian reasoning on this blog.

So, enough for Sunday evening; I’ve got better things to do. 😐

13. In a criminal court, a conviction means that judge and/or jury (depending on the system) need to be convinced of guilt beyond reasonable doubt (5 sigma, say). In a civil court, the criterion is balance of evidence. Which is appropriate here? Looking at the figure above, it seems silly to say that there is no convincing evidence of acceleration. In other words, I think that the appropriate metaphor is a civil court, not a criminal court.

Of course, this depends on one’s prejudice. If one thinks that the cosmological constant is something so strange that we have to be really, really, convinced, OK, the argument that we are not there yet makes sense. If one thinks that lambda, like Omega, is a free parameter, or Lake’s alpha (which in this case leads to the same conclusion), then whether the universe is accelerating or not depends on what we measure, and the balance of evidence, even just the supernova stuff, favours acceleration. Yes, maybe not 5-sigma sure, but turn the question around: are you convinced from the supernova data that the universe is not accelerating?

14. Moncy Vilavinal John Says:

Dear All, I would like to call your attention to a related new paper that appeared today in the arXiv https://arxiv.org/abs/1610.09885

“Has dark energy had its day?”

16. Moncy Vilavinal John Says:

A common misconception is that ‘no acceleration’ means a Milne-type empty universe. This is not correct. A nonempty flat universe, with both matter (66.6%) and dark energy (33.3%), can also expand with no acceleration. See a recent paper, [1610.09885] Realistic coasting cosmology from the Milne model.
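The arithmetic behind those percentages can be checked directly from the deceleration parameter. For matter plus a cosmological constant, zero acceleration today combined with flatness gives:

```latex
q_0 = \tfrac{1}{2}\Omega_m - \Omega_\Lambda = 0
\quad\Rightarrow\quad
\Omega_\Lambda = \tfrac{1}{2}\Omega_m ,
\qquad
\Omega_m + \Omega_\Lambda = 1
\quad\Rightarrow\quad
\Omega_m = \tfrac{2}{3} \approx 66.6\%,\quad
\Omega_\Lambda = \tfrac{1}{3} \approx 33.3\% .
```

Note that with a genuine cosmological constant this balance holds only instantaneously, since \Omega_m and \Omega_\Lambda evolve differently with time.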

Use of Bayesian theory in comparing cosmological models is now widely discussed. It may be interesting to know that such work was done back in 2001 and published in a Physical Review D paper. The result was the same as the recent claim: that the supernova data do not provide strong evidence in favour of an accelerating universe when compared, using Bayesian theory, to a non-accelerating ‘coasting model’.
http://journals.aps.org/prd/abstract/10.1103/PhysRevD.65.043506

This result was confirmed again in The Astrophysical Journal (ApJ) in 2005.
http://iopscience.iop.org/article/10.1086/432111/meta

The abstract of this paper tells it all:

In this paper, using a significantly improved version of the model-independent, cosmographic approach to cosmology, we address an important question: was there a decelerating past for the universe? To answer this, Bayes’s probability theory is employed, which is the most appropriate tool for quantifying our knowledge when it changes through the acquisition of new data. The cosmographic approach helps to sort out the models in which the universe was always accelerating from those in which it decelerated for at least some time in the period of interest. The Bayesian model comparison technique is used to discriminate these rival hypotheses with the aid of recent releases of supernova data. We also attempt to provide and improve another example of Bayesian model comparison, performed between some Friedmann models, using the same data. Our conclusion, which is consistent with other approaches, is that the apparent magnitude-redshift data alone cannot discriminate these competing hypotheses. We also argue that the lessons learned using Bayesian theory are extremely valuable to avoid frequent U-turns in cosmology.

• telescoper Says:

This model (the ‘coasting model’) has been discussed for over 50 years, not just since 2001. The main problem with it is that it just doesn’t fit the totality of the data.

It’s also the case that the coasting phase of the model is transient. We have to live at the precise point when the acceleration is zero.

• Moncy Vilavinal John Says:

Thank you for the comment. About the Milne model, you are correct: it has been discussed for more than half a century. I was pointing out the misconception that a constant expansion rate is possible only in a Milne model.
About the second part of the comment: I wonder why you believe that we must have such a synchronicity problem? Avelino and Kirshner just say that there is a synchronicity problem in the Lambda-CDM model.

• “About the second part of the comment: I wonder why you believe that we must have such a synchronicity problem? Avelino and Kirshner just say that there is a synchronicity problem in the Lambda-CDM model.”

First, a model has to fit all the data, or at least all correct data, not just the supernova data. Second, one can ask which synchronicity problem is more severe. Third, if you think there is a synchronicity problem, read Bianchi and Rovelli. Also, explain why the coincident angular sizes of the Sun and Moon (a much more spectacular coincidence, which also holds only for a relatively short time around the present) do not need an explanation.

• Moncy Vilavinal John Says:

But I hope you will agree that it is by trying to solve such coincidence problems, rather than simply living with them, that science has grown.

• One can solve problems only if they really are problems.