One of the topics bubbling away on the back burner of cosmology is the possible tension between different determinations of cosmological parameters, especially the Hubble constant (H0) as measured by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter.
Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. Those parameters do encode information about H0, however, so a value can be extracted from the fit indirectly. In other words, it is a derived parameter.
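Roughly speaking (and glossing over details such as radiation, neutrinos and the exact definition of the acoustic scale used in the Planck analysis), the CMB pins down the physical matter densities and the angular acoustic scale, and in a flat LCDM model those quantities fix the Hubble constant:

\[
\theta_* = \frac{r_s(z_*)}{D_M(z_*)}, \qquad
D_M(z_*) = \int_0^{z_*} \frac{c\,\mathrm{d}z}{H(z)}, \qquad
H(z) = H_0 \sqrt{\Omega_m (1+z)^3 + 1 - \Omega_m},
\]

with \(\Omega_m h^2\) held at the value determined from the acoustic peak heights, so one solves for the value of \(h = H_0/100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) that reproduces the measured \(\theta_*\).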
Values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other data sets are included. According to the very latest Planck paper on cosmological parameter estimates, the headline determination is H0 = (67.8 +/- 0.9) km/s/Mpc.
About 18 months ago I blogged about a “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data, which quoted a headline value of (73.24 +/- 1.74) km/s/Mpc, hinting at a discrepancy somewhere around the 3 sigma level depending on precisely which determination you use. A news item on the BBC hot off the press reports that a more recent analysis by the same group is stubbornly sitting around the same value of the Hubble constant, with a slightly smaller error, so that the discrepancy is now about 3.4σ. On the other hand, the history of this type of study provides grounds for caution, because the systematic errors have often turned out to be much larger and more uncertain than the statistical errors…
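As a quick sanity check, the usual back-of-the-envelope estimate of the tension is just the difference between the two central values divided by the quadrature sum of the quoted errors (which treats the two determinations as independent and Gaussian, itself an approximation). With the numbers quoted above:

```python
import math

# Quoted central values and 1-sigma errors in km/s/Mpc
h0_planck, sig_planck = 67.8, 0.9     # Planck (derived, base LCDM)
h0_ladder, sig_ladder = 73.24, 1.74   # Riess et al. distance-ladder value

# Naive tension: difference in units of the combined error,
# assuming the two measurements are independent and Gaussian
diff = h0_ladder - h0_planck
sigma = math.sqrt(sig_planck**2 + sig_ladder**2)
print(f"Delta H0 = {diff:.2f} km/s/Mpc; tension ~ {diff/sigma:.1f} sigma")
```

With these particular numbers it comes out at about 2.8σ; using the tighter errors from Planck combined with other data sets, or the slightly smaller error from the more recent distance-ladder analysis, is what pushes the quoted figure up towards 3.4σ.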
Nevertheless, I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters.
If one does take this tension seriously, it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor: in other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which could help pin down the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away; it will be very interesting if it turns out to be real. I just don’t think the evidence at the moment is convincing enough to point to something beyond the standard cosmological model. I may well turn out to be wrong.
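To give a rough feel for where that Ockham penalty comes from (with entirely made-up numbers, not a real analysis of any cosmological data set): for a single extra parameter with a Gaussian likelihood of width sigma and a flat prior of width delta, the evidence ratio in favour of the extended model is roughly the best-fit likelihood ratio multiplied by an “Occam factor” of sigma*sqrt(2*pi)/delta, so a modest improvement in fit can easily be wiped out by a broad prior.

```python
import math

# Toy illustration of the Ockham penalty in Bayesian model selection.
# All numbers are invented purely for illustration.
sigma = 1.0           # width of the (Gaussian) likelihood for the extra parameter
delta = 20.0          # width of the flat prior assigned to the extra parameter
shift_in_sigma = 2.0  # how far the best-fit value lies from the fixed LCDM value

# Gain in likelihood from freeing the parameter
fit_gain = math.exp(0.5 * shift_in_sigma**2)

# Occam factor: most of the prior range is "wasted" on values the data rule out
occam = sigma * math.sqrt(2.0 * math.pi) / delta

bayes_factor = fit_gain * occam
print(f"likelihood gain = {fit_gain:.1f}, Occam factor = {occam:.2f}, "
      f"Bayes factor (extended model vs LCDM) = {bayes_factor:.2f}")
```

With these numbers the Bayes factor comes out slightly below one, so even a ~2σ improvement in fit isn’t enough to overcome the penalty for the extra freedom; it takes a much more decisive improvement (or a much better-motivated prior) before model selection starts to favour the extension.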
Anyway, since polls seem to be quite popular these days, let me resurrect this old one and see if opinions have changed!