Archive for Riess et al.

Who’s worried about the Hubble Constant?

Posted in The Universe and Stuff on January 11, 2018 by telescoper

One of the topics that is bubbling away on the back burner of cosmology is the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter.

Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Image: summary table of Planck cosmological parameter estimates]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates, the headline determination is H0 = (67.8 ± 0.9) km/s/Mpc.
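For reference (this is my own summary of the parametrisation, not a quotation from the Planck papers), the six base parameters of the minimal model are the physical baryon and cold dark matter densities Ω_b h² and Ω_c h², the angular acoustic scale θ_MC, the optical depth τ, and the primordial amplitude and tilt A_s and n_s. The Hubble constant is then read off from the fitted value of h,

\[
H_0 = 100\,h \;\; \mathrm{km\,s^{-1}\,Mpc^{-1}},
\]

with h pinned down (within the assumed flat LCDM model) by the requirement that the angular acoustic scale be consistent with the fitted densities. That is why the Planck value of H0 is model-dependent in a way that a distance-ladder measurement is not.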

About 18 months ago I blogged about a “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data, which quotes a headline value of (73.24 ± 1.74) km/s/Mpc, hinting at a discrepancy somewhere around the 3 sigma level depending on precisely which determination you use. A news item on the BBC, hot off the press, reports that a more recent analysis by the same group is stubbornly sitting around the same value of the Hubble constant, with a slightly smaller error, so that the discrepancy is now about 3.4σ. On the other hand, the history of this type of study provides grounds for caution, because the systematic errors have often turned out to be much larger and more uncertain than the statistical errors…
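For what it’s worth, here is a minimal sketch of the back-of-the-envelope arithmetic behind quoted significances like these, assuming the two determinations are independent with Gaussian errors (the exact figure depends on which Planck combination you compare against, which is how one arrives at numbers like 3.4σ):

```python
import math

# Headline values quoted above (km/s/Mpc)
h0_planck, err_planck = 67.8, 0.9    # Planck, derived within base LCDM
h0_ladder, err_ladder = 73.24, 1.74  # Riess et al. distance-ladder value

# Treating the two as independent Gaussian measurements, the "tension"
# is the difference divided by the errors added in quadrature.
tension = (h0_ladder - h0_planck) / math.hypot(err_planck, err_ladder)
print(f"discrepancy of roughly {tension:.1f} sigma")  # about 2.8 sigma with these numbers
```

Roughly speaking, comparing against a tighter (and slightly lower) Planck-based determination is what pushes this up towards the 3.4σ quoted above.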

Nevertheless, I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters.

If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor: the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit (a rough sketch of this Occam factor is given below). GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which could turn out to be the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.
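To illustrate what that Occam penalty looks like (this is the standard textbook approximation, not anything specific to the Planck analysis), the Bayesian evidence for a model with one extra parameter θ is roughly

\[
Z \;=\; \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta \;\approx\; \mathcal{L}_{\rm max}\,\frac{\delta\theta}{\Delta\theta},
\]

where δθ is the width of the likelihood in the new parameter and Δθ the width of its prior. Since δθ/Δθ is typically much less than one, the extra parameter is only favoured if the improvement in the maximum likelihood is large enough to pay that penalty, and the analyses referred to above suggest that it isn’t.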

Anyway, since polls seem to be quite popular these days, let me resurrect this old one and see if opinions have changed!

 


Should we worry about the Hubble Constant?

Posted in The Universe and Stuff on July 27, 2016 by telescoper

One of the topics that came up in the discussion sessions at the meeting I was at over the weekend was the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter. Coincidentally, I found this old preprint while tidying up my office yesterday:

[Image: table of cosmological parameter determinations from a 1979 preprint]

Things have changed quite a bit since 1979! Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Image: summary table of Planck cosmological parameter estimates]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates, the headline determination is H0 = (67.8 ± 0.9) km/s/Mpc.

Note, however, that a recent “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data quotes a headline value of (73.24 ± 1.74) km/s/Mpc. Had these two values been obtained in 1979 we wouldn’t have worried, because the errors would have been much larger, but nowadays the measurements are much more precise and there does seem to be a hint of a discrepancy somewhere around the 3 sigma level, depending on precisely which determination you use. On the other hand, the history of Hubble constant determinations is one of results being quoted with very small “internal” errors that later turned out to be dwarfed by systematic uncertainties.

I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters. If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which could turn out to be the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.

It’s quite interesting to think how much we scientists tend to carry on despite the signs that things might be wrong. Take, for example, Newton’s Gravitational Constant, G. Measurements of this parameter are extremely difficult to do, but different experiments do seem to be in disagreement with each other. If Newtonian gravity turned out to be wrong that would indeed be extremely exciting, but I think it’s a wiser bet that there are uncontrolled experimental systematics. On the other hand, there is a danger that we might ignore evidence that there’s something fundamentally wrong with our theory. It’s sometimes difficult to judge how seriously to take experimental results.

Anyway, I don’t know what cosmologists think in general about this, so there’s an excuse for a poll: