Archive for Hubble constant

More Cosmic Tension?

Posted in The Universe and Stuff on November 12, 2019 by telescoper

Quite a lot of fuss was being made in cosmological circles while I was away last week concerning a paper just published in Nature Astronomy by Eleonora Di Valentino, Alessandro Melchiorri and Joe Silk, which claims that the Planck Cosmic Microwave Background and other data provide evidence that the Universe might be closed (or at least have positive spatial curvature), in contrast to the standard cosmological model, in which the spatial geometry is Euclidean. Nature Astronomy is behind a paywall but the paper is available for free on the arXiv here. The abstract reads:

The recent Planck Legacy 2018 release has confirmed the presence of an enhanced lensing amplitude in CMB power spectra compared to that predicted in the standard ΛCDM model. A closed universe can provide a physical explanation for this effect, with the Planck CMB spectra now preferring a positive curvature at more than 99% C.L. Here we further investigate the evidence for a closed universe from Planck, showing that positive curvature naturally explains the anomalous lensing amplitude and demonstrating that it also removes a well-known tension within the Planck data set concerning the values of cosmological parameters derived at different angular scales. We show that since the Planck power spectra prefer a closed universe, discordances higher than generally estimated arise for most of the local cosmological observables, including BAO. The assumption of a flat universe could, therefore, mask a cosmological crisis where disparate observed properties of the Universe appear to be mutually inconsistent. Future measurements are needed to clarify whether the observed discordances are due to undetected systematics, or to new physics, or simply are a statistical fluctuation.

I think the important point to take from this study is that estimates of cosmological parameters obtained from Planck are relatively indirect, in that they involve the simultaneous determination of several parameters, some of which are almost degenerate. For example, the ‘anomalous’ lensing amplitude discussed in this paper is degenerate with the curvature, so that changing one can mimic the effect on the observables of changing the other; see Figure 2 in the paper.
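To illustrate what that kind of degeneracy means in practice, here is a minimal toy sketch in Python. The setup is purely illustrative and has nothing to do with the actual Planck likelihood: imagine the data pin down only the sum of two parameters, so that changing one can be compensated by changing the other.

```python
import numpy as np

# Toy illustration of a parameter degeneracy (nothing to do with the real
# Planck likelihood): suppose the data constrain only the combination a + b,
# say to be 1.0 +/- 0.1, while a and b each have a broad uniform prior.
rng = np.random.default_rng(42)
a = rng.uniform(-2.0, 2.0, 200_000)
b = rng.uniform(-2.0, 2.0, 200_000)
weights = np.exp(-0.5 * ((a + b - 1.0) / 0.1) ** 2)  # likelihood of each prior sample

def weighted_std(x, w):
    mean = np.average(x, weights=w)
    return np.sqrt(np.average((x - mean) ** 2, weights=w))

print(f"posterior spread of a alone: {weighted_std(a, weights):.2f}")    # broad
print(f"posterior spread of a + b:   {weighted_std(a + b, weights):.2f}")  # ~0.1
# Only the combination is pinned down; each parameter alone is poorly
# constrained. This is the sense in which the lensing amplitude and the
# curvature can trade off against each other in the Planck fits.
```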

It’s worth mentioning another (and, in my opinion, better argued) paper on a similar topic by Will Handley of Cambridge which is on the arXiv here. The abstract of this one reads:

The curvature parameter tension between Planck 2018, cosmic microwave background lensing, and baryon acoustic oscillation data is measured using the suspiciousness statistic to be 2.5 to 3σ. Conclusions regarding the spatial curvature of the universe which stem from the combination of these data should therefore be viewed with suspicion. Without CMB lensing or BAO, Planck 2018 has a moderate preference for closed universes, with Bayesian betting odds of over 50:1 against a flat universe, and over 2000:1 against an open universe.
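As an aside (this is my own back-of-the-envelope arithmetic, not a calculation from the paper), those betting odds translate into rather small posterior probabilities if they are read as odds against each geometry given the Planck 2018 data alone:

```python
# Convert "betting odds against" into posterior probabilities. The 50:1 and
# 2000:1 figures are quoted as lower bounds in the abstract above, so the
# probabilities below are upper bounds.
for geometry, odds_against in [("flat universe", 50), ("open universe", 2000)]:
    prob = 1.0 / (1.0 + odds_against)
    print(f"P({geometry} | Planck 2018 alone) <= {prob:.2%}")
```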

Figure 1 makes a rather neat point: taken separately, neither the combination of Planck and Baryon Acoustic Oscillations nor the combination of Planck and direct Hubble constant estimates gives consistent values for the Hubble constant and the curvature:

I don’t know what the resolution of these tensions is, but I think it is a bit dangerous to dismiss them simply as statistical flukes. They might be that, of course, but they also might not be. By shrugging one’s shoulders and ignoring such indications one might miss something very fundamental. On the other hand, in my opinion, there is nothing here that definitely points the finger at spatial curvature either: it is possible that there is something else missing from the standard model that, if included, would resolve these tensions. But what is the missing link?

Answers on a postcard, or through the comments box.

Gravitational Lensing, Cosmological Distances and the Hubble Constant

Posted in The Universe and Stuff on October 17, 2019 by telescoper

To continue the ongoing Hubble constant theme, there is an interesting paper on the arXiv by Shajib et al. about determining a distance to a gravitational lens system; I grabbed the above pretty picture from the paper.

The abstract is:


You can click on this to make it bigger. You will see that this approach gives a ‘high’ value of H0 ≈ 74.2 km s⁻¹ Mpc⁻¹, consistent with local stellar distance measures rather than with the ‘cosmological’ value, which comes in at around H0 ≈ 67 or so. It’s also consistent with the value derived from other gravitational lens studies discussed here.
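For context, the quantity such lensing analyses usually constrain is the so-called time-delay distance, which is inversely proportional to H0. The relations below are the standard ones from time-delay cosmography, written here for orientation rather than taken from the Shajib et al. paper itself:

```latex
% Time-delay distance for a lens at redshift z_d and a source at redshift z_s,
% built from the angular-diameter distances to the lens (D_d), to the source
% (D_s), and from lens to source (D_ds):
D_{\Delta t} \equiv (1 + z_d)\,\frac{D_d D_s}{D_{ds}} \;\propto\; \frac{1}{H_0}
% The measured time delay between images scales with this distance,
% \Delta\phi being the Fermat-potential difference between the images:
\Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi
```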

Here’s my ongoing poll on the Hubble constant:


More Hubble Constant Tension

Posted in The Universe and Stuff on October 14, 2019 by telescoper

Here’s the abstract of another contribution to the ongoing discussion of the so-called tension between different estimates of the Hubble constant (see this blog passim):

You can find the actual paper (by Lin, Mack and Hou) on the arXiv here.

Now, before Mr Hine starts to fill up my blocked comments folder with rants, I will add a few comments of my own.

First, at the Royal Astronomical Society on Friday I discussed all this with a renowned observational astronomer and expert on stellar distance measurements. He agreed with me that if the ‘tension’ is indeed real then it is far more likely to be a problem with stellar distance measurements than with the cosmology.

Second, I am writing a review of all this to be published in Astronomy & Geophysics next year. Watch this space.

Third, this gives me an excuse to include yet again my poll on whether you are worried about the “tension”:

Hubble Tension: an “Alternative” View?

Posted in Bad Statistics, The Universe and Stuff on July 25, 2019 by telescoper

There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.

Here is the abstract of the Vagnozzi paper:

I posted this picture last week which is relevant to the discussion:

The point is that if you allow the equation-of-state parameter w to vary from the value w = -1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter then you have to assign a prior probability over the space of values that parameter could take. Spreading the prior probability over a range of values dilutes the evidence for the extended model, and that penalty is carried through to the posterior probability. Unless the new model fits the observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.

The Vagnozzi paper represents a statement of this in the context of the Hubble tension. If a new floating parameter w is introduced the data prefer a value less than -1 (as demonstrated in the figure) but on posterior probability grounds the resulting model is less probable than the standard cosmology for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is introduced then the resulting model is not penalized by having to spread the prior probability out over a range of values but puts all its prior eggs in one basket labelled w = -1.3.

This is of course true. The problem is that the value w = -1.3 does not derive from any ab initio principle of physics; it is chosen a posteriori, on the basis of the inference described above. It’s no surprise that you can get a better answer if you know what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score.

Indeed, many cosmologists think any value of w < -1 should be ruled out ab initio because such values don’t make physical sense anyway.
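To make the Occam argument above concrete, here is a minimal toy sketch in Python. The numbers are purely illustrative (a single Gaussian ‘measurement’ of w, not the real cosmological likelihood), but the logic is the one described in the post:

```python
from scipy.integrate import quad
from scipy.stats import norm

# Toy sketch of the Occam-penalty argument (illustrative numbers only; this is
# not the actual cosmological analysis). Pretend the data give a single
# Gaussian "measurement" of w centred at -1.3 with error 0.15.
w_hat, sigma = -1.3, 0.15

# Model A: standard cosmology, w fixed at -1. Evidence = likelihood at w = -1.
Z_A = norm.pdf(w_hat, loc=-1.0, scale=sigma)

# Model B: w free to float, with a uniform prior over [-3, 1] (width 4).
Z_B, _ = quad(lambda w: norm.pdf(w_hat, loc=w, scale=sigma) / 4.0, -3.0, 1.0)

# Model C: w fixed a posteriori at -1.3, i.e. all prior eggs in one basket.
Z_C = norm.pdf(w_hat, loc=-1.3, scale=sigma)

print(f"Evidence ratios: B/A = {Z_B / Z_A:.2f}, C/A = {Z_C / Z_A:.2f}")
# B/A comes out below 1: the floating-w model fits better at its best point
# but is penalised for spreading its prior over a wide range. C/A comes out
# well above 1, but only because the "prediction" w = -1.3 was chosen after
# peeking at the data, which is exactly the objection raised above.
```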


The Last Resting Place of the Hubble Parameter?

Posted in Uncategorized on July 22, 2019 by telescoper

Last week was rather busy on the blog, with a run of posts about the Hubble constant (or, more precisely, the  present value of the Hubble parameter) attracting the most traffic. Somehow during all the excitement I allowed myself to be persuaded to write a piece for RTÉ Brainstorm about this issue. My brief is to write a detailed account of the current controversy in language accessible to a lay reader in not more than 800 words. That’s quite a challenge. Better get on with it.

Perhaps after that I’ll be able to lay the Hubble parameter to rest, at least for a while:

The original photograph (and joke) may be found here.

Thoughts on Cosmological Distances

Posted in The Universe and Stuff on July 18, 2019 by telescoper

At the risk of giving the impression that I’m obsessed with the issue of the Hubble constant, I thought I’d do a quick post about something vaguely related to that which I happened to be thinking about the other night.

It has been remarked that the two allegedly discrepant sets of measures of the cosmological distance scale seen, for example, in the diagram below differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types).

The above Figure is taken from the paper I blogged about a few days ago here.

That is basically true. There is, however, another difference between the two types of determination: the high values of the Hubble constant are generally based on interpreting the measured brightness of observed sources (i.e. they involve luminosity distances), while the lower values are generally based on trigonometry (specifically, angular-diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.
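For reference, the two kinds of distance are defined operationally as follows (these are the standard textbook definitions, not anything specific to the papers discussed here):

```latex
% Luminosity distance: defined so that the inverse-square law relates the
% observed flux F to the intrinsic luminosity L of a source:
F = \frac{L}{4\pi D_L^{2}}
% Angular-diameter distance: defined so that Euclidean trigonometry relates
% the observed angle \theta to the physical size \ell of an object:
\theta = \frac{\ell}{D_A}
```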

Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. The global estimates are therefore not derived in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.

With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?

Well, if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular-diameter distances and luminosity distances differ only by a factor of (1+z)², where z is the redshift: D_L = D_A (1+z)².

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_L = D_A (1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?
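As a quick numerical sanity check of the reciprocity relation, here is a minimal sketch using astropy; the cosmological parameters are illustrative round numbers, not a fit to anything:

```python
# Numerical check of Etherington's reciprocity relation D_L = D_A (1+z)^2
# in a flat LCDM model (parameter values are illustrative, not fitted).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # H0 in km/s/Mpc, matter density 0.3

for z in [0.5, 1.0, 2.0]:
    D_A = cosmo.angular_diameter_distance(z)
    D_L = cosmo.luminosity_distance(z)
    ratio = (D_L / (D_A * (1 + z) ** 2)).value
    print(f"z = {z}: D_L / [D_A (1+z)^2] = {ratio:.6f}")   # = 1 at every redshift
```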

Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift depends only on the scale factor when the light is emitted and the scale factor when it is received (1 + z = a(t_received)/a(t_emitted)), not on how the scale factor evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation, which relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it interacts with light rays only via gravity).

In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region in a way that reconciles the cosmological determination of the Hubble constant with the local values. Here I am just talking about a hypothetical simpler case.

In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have a far from perfect understanding of globally inhomogeneous cosmological models.

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is another way out. The theorem also requires photon numbers to be conserved, so some mysterious mechanism for making photons disappear might do the trick; adding an exotic field that interacts with light in a peculiar way is another possibility.

Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter-based and luminosity-based distances, then the most obvious place to look for a resolution would be in departures of the metric from the Robertson-Walker form.

Addendum: just to clarify one point, the reciprocity theorem applies to any GR-based metric theory, i.e. just about anything without torsion in the metric, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer.

The Hubble Constant from the Tip of the Red Giant Branch

Posted in The Universe and Stuff on July 16, 2019 by telescoper

At the risk of boring everyone again with Hubble constant news, there’s yet another paper on the arXiv about the Hubble constant. This one is another ‘local’ measurement, in that it uses properties of nearby stars, this time based on a new calibration of the Tip of the Red Giant Branch. The paper is by Wendy Freedman et al. and its abstract reads:

We present a new and independent determination of the local value of the Hubble constant based on a calibration of the Tip of the Red Giant Branch (TRGB) applied to Type Ia supernovae (SNeIa). We find a value of H0 = 69.8 +/- 0.8 (+/- 1.1% stat) +/- 1.7 (+/- 2.4% sys) km/sec/Mpc. The TRGB method is both precise and accurate, and is parallel to, but independent of the Cepheid distance scale. Our value sits midway in the range defined by the current Hubble tension. It agrees at the 1.2-sigma level with that of the Planck 2018 estimate, and at the 1.7-sigma level with the SH0ES measurement of H0 based on the Cepheid distance scale. The TRGB distances have been measured using deep Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) imaging of galaxy halos. The zero point of the TRGB calibration is set with a distance modulus to the Large Magellanic Cloud of 18.477 +/- 0.004 (stat) +/- 0.020 (sys) mag, based on measurement of 20 late-type detached eclipsing binary (DEB) stars, combined with an HST parallax calibration of a 3.6 micron Cepheid Leavitt law based on Spitzer observations. We anchor the TRGB distances to galaxies that extend our measurement into the Hubble flow using the recently completed Carnegie Supernova Project I sample containing about 100 well-observed SNeIa. There are several advantages of halo TRGB distance measurements relative to Cepheid variables: these include low halo reddening, minimal effects of crowding or blending of the photometry, only a shallow (calibrated) sensitivity to metallicity in the I-band, and no need for multiple epochs of observations or concerns of different slopes with period. In addition, the host masses of our TRGB host-galaxy sample are higher on average than the Cepheid sample, better matching the range of host-galaxy masses in the CSP distant sample, and reducing potential systematic effects in the SNeIa measurements.

You can download a PDF of the paper here.
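For a rough sense of how this value compares with the others, here is a small back-of-the-envelope calculation. The TRGB numbers come from the abstract above; the Planck 2018 and SH0ES values are the commonly quoted ones, inserted here purely for illustration:

```python
import numpy as np

# Rough tension estimates between the TRGB value and two other determinations.
# TRGB errors are taken from the abstract above; the Planck 2018 and SH0ES
# numbers are the commonly quoted values, added here only for illustration.
measurements = {
    "TRGB (Freedman et al.)": (69.8, np.hypot(0.8, 1.7)),  # stat (+) sys in quadrature
    "Planck 2018 (CMB)":      (67.4, 0.5),
    "SH0ES (Cepheids)":       (74.0, 1.4),
}

h_trgb, err_trgb = measurements["TRGB (Freedman et al.)"]
for name, (h, err) in measurements.items():
    if name.startswith("TRGB"):
        continue
    sigma = abs(h_trgb - h) / np.hypot(err_trgb, err)
    print(f"TRGB vs {name}: {sigma:.1f} sigma")
# Roughly reproduces the 1.2-sigma and 1.7-sigma agreement levels quoted in
# the abstract (the exact figures depend on how the errors are combined).
```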

Note that the value obtained using the TRGB here lies in between the two determinations using the cosmic microwave background and the Cepheid distance scale that I discussed, for example, here. This is illustrated nicely by the following couple of Figures:

I know that this result – around 70 km s⁻¹ Mpc⁻¹ – has made some people a bit more relaxed about the apparent tension between the previous measurements, but what do you think? Here’s a poll so you can express your opinion.

My own opinion is that if there isn’t any tension at all at the one-sigma level then you should consider the possibility that you got sigma wrong!