## Who’s worried about the Hubble Constant?

One of the topics bubbling away on the back burner of cosmology is the possible tension between different determinations of the cosmological parameters, especially the Hubble constant (H_{0}) as measured by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter.

Before getting to the point I should explain that Planck does not determine H_{0} directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H_{0}, however, so it is possible to extract a value from the data indirectly. In other words, it is a derived parameter.

Values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates, the headline determination is H_{0} = (67.8 +/- 0.9) km/s/Mpc.

About 18 months ago I blogged about a “direct” determination of the Hubble constant by Riess et al., using Hubble Space Telescope data, which quotes a headline value of (73.24 +/- 1.74) km/s/Mpc, hinting at a discrepancy somewhere around the 3σ level depending on precisely which determination you use. A news item on the BBC, hot off the press, reports that a more recent analysis by the same group is stubbornly sitting around the same value of the Hubble constant, with a slightly smaller error, so that the discrepancy is now about 3.4σ. On the other hand, the history of this type of study provides grounds for caution, because the systematic errors have often turned out to be much larger and more uncertain than the statistical errors…
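As a rough back-of-the-envelope check, one can combine the two quoted measurements in quadrature, assuming (naively) that their errors are Gaussian and independent and ignoring the systematics just mentioned:

```python
import math

# Illustrative only: estimate the significance of the discrepancy by
# combining the quoted 1-sigma errors in quadrature.
h0_planck, err_planck = 67.8, 0.9    # km/s/Mpc (Planck, derived parameter)
h0_ladder, err_ladder = 73.24, 1.74  # km/s/Mpc (Riess et al., distance ladder)

diff = h0_ladder - h0_planck
combined_err = math.sqrt(err_planck**2 + err_ladder**2)
sigma = diff / combined_err
print(f"discrepancy: {diff:.2f} +/- {combined_err:.2f} km/s/Mpc ({sigma:.1f} sigma)")
```

With these particular numbers the naive answer comes out a little under 3σ; plugging in other published determinations (or smaller errors) shifts the figure around, which is why the quoted significance depends on exactly which measurements you compare.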

Nevertheless, I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters.

If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor: the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. Gaia may shortly tell us whether or not there are problems with the local stellar distance scale, which could be the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away; I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.

Anyway, since polls seem to be quite popular these days, let me resurrect this old one and see if opinions have changed!


January 11, 2018 at 4:24 pm

Is there any estimate of how many more neutron-star mergers with optical counterparts LIGO/Virgo need to observe to determine H_0 with comparable accuracy? (And how long that might take with the current estimated rates.) That seems to be the easiest way to clarify the situation.

January 11, 2018 at 9:56 pm

From the nice Bayesian analysis of the LIGO team (https://arxiv.org/pdf/1710.05835.pdf) it looks like several hundred would be needed. The peculiar velocity modelling might also need to be improved once the precision gets to this level.
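A crude way to see where that number comes from: a single standard siren (GW170817 gave roughly H_0 = 70 +12/-8 km/s/Mpc, i.e. a fractional error of order 15%) and the assumption that precision improves as 1/√N for identical, independent events. The numbers below are illustrative; the LIGO analysis linked above does a proper population-level forecast, including the peculiar-velocity modelling.

```python
import math

# Naive 1/sqrt(N) scaling estimate, purely illustrative.
single_event_frac_err = 0.15  # ~15% from GW170817 alone (70 +12/-8 km/s/Mpc)
target_frac_err = 0.01        # ~1%, comparable to distance-ladder precision

n_events = math.ceil((single_event_frac_err / target_frac_err) ** 2)
print(f"roughly {n_events} standard sirens needed at this naive scaling")
```

This gives a couple of hundred events, in the same ballpark as the “several hundred” from the proper analysis.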

January 12, 2018 at 7:59 am

There are the “local” measurements, there is *Planck*—and there is *WMAP*. *Planck* is the odd man out. Wouldn’t it make sense to understand the tension between *Planck* and *WMAP* first, considering the similarity of the two? OK, perhaps the *WMAP* uncertainties are large enough so that there is no real tension, especially if one is not confident of the exact size of the error bars. However, another interesting thing is that even if there is no tension (i.e. they are compatible within the errors) between *Planck* and various other measurements, there is additional information in the fact that *all* other measurements lie higher than Planck. There is a nice overview here: https://shsuyu.github.io/H0LiCOW/site/index.html

and kudos to the collaboration (including some of my former colleagues) for coming up with the most bizarre acronym in astronomy.

January 12, 2018 at 8:11 am

We should remember how far we have come. Many of us—and we are not *that* old—still remember when the Hubble constant was uncertain by a factor of two. At the Liège gravitational-lensing conference in 1993, there was a similar debate about the time delay between the two images of the quasar 0957+561, with one group (principally Bill Press) claiming a longer delay and the other (ours, who turned out to be right) claiming a shorter delay. The Hubble constant is inversely proportional to the time delay, so this was also a debate about a higher or lower Hubble constant (though over a larger range than the current “tension” debate). During one of the talks, Paul Schechter cried out from the audience: “What’s the problem? They *agree* at three sigma!” 🙂
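The inverse scaling at work here is simple: for a fixed lens mass model, the inferred H_0 goes as 1/Δt, so a longer measured delay means a lower Hubble constant. A tiny sketch with purely hypothetical numbers (not the actual 0957+561 measurements):

```python
# For a fixed lens model, the inferred Hubble constant scales as 1/(time delay).
def rescale_h0(h0_ref, delay_ref_days, delay_new_days):
    """Rescale an inferred H0 when the measured time delay changes."""
    return h0_ref * delay_ref_days / delay_new_days

# Hypothetical: if a 540-day delay implied H0 = 50 km/s/Mpc, a shorter
# 417-day delay for the same lens would imply a ~30% higher value.
print(rescale_h0(50.0, 540.0, 417.0))
```

So the two camps’ delay estimates translated directly into competing Hubble-constant values, which is why the delay debate was really a Hubble-constant debate.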