Hubble’s Constant – The Tension Mounts!

There’s a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. The abstract is here:

You can download a PDF of the full paper here.

You will see that these measurements, based on observations of time delays in multiply-imaged quasars that have been gravitationally lensed, give higher values of the Hubble constant than determinations from, e.g., the Planck experiment.
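For those unfamiliar with the technique, the essential idea (in schematic form only; see the paper itself for how it is actually done) is that the delay between the arrival times of the different images of a variable source depends on the so-called time-delay distance, which is inversely proportional to the Hubble constant:

$$\Delta t_{AB} = \frac{D_{\Delta t}}{c}\left[\phi(\theta_A) - \phi(\theta_B)\right], \qquad D_{\Delta t} \equiv (1+z_d)\,\frac{D_d\,D_s}{D_{ds}} \propto \frac{1}{H_0},$$

where φ is the Fermat potential of the lens and D_d, D_s and D_ds are angular diameter distances to the deflector, to the source, and from deflector to source. Measure the delays, model the mass distribution of the lens, and out pops D_Δt and hence H0.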

Here’s a nice summary of the tension in pictorial form:

And here are some nice pictures of the lensed quasars involved in the latest paper:


It’s interesting that these determinations seem more consistent with local distance-scale approaches than with global cosmological measurements, but the possibility remains of some unknown systematic.

Time, methinks, to resurrect my long-running poll on this!

Please feel free to vote. At the risk of inciting Mr Hine to clog up my filter with further gibberish, you may also comment through the box below.


13 Responses to “Hubble’s Constant – The Tension Mounts!”

  1. telescoper Says:

    I shall be working.

  2. George Jones Says:

    Cosmic coincidence? Minutes after reading this blog post, a bot sent me something relevant. LIGO and standard sirens might have something to say about this.

    https://physicsworld.com/a/merging-neutron-stars-could-resolve-hubble-constant-crisis-sooner-than-previously-thought/

    https://www.nature.com/articles/s41550-019-0820-1

    https://arxiv.org/abs/1802.03404

  3. drewancameron Says:

    What do you think about the proposition of Section 4 that comparing the posteriors on the cosmological parameters resulting from fits to each quasar system separately, then jointly, provides a test of whether or not there are systematic errors in any one of these datasets?

    The details give me the heebie-jeebies: pairwise, the tests are based on the Bayes factor comparing the joint fit versus the completely independent fit, which is a set-up intrinsically geared in favour of rejecting the independent fit; ditto for the all-but-one checks. Also, the Bayes factors seem like they might be computed from the BIC (certainly it is used elsewhere, so we're talking an order-1 approximation). If you had a suspicion that there might be systematics, would failing to reject the null hypothesis in this way give you comfort? Or would you try to be Bayesian and specify some models for what the systematic errors might look like in your measurements?
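    To make the parameter-counting point concrete, here is a quick toy version of the BIC-style Bayes-factor approximation I have in mind (my own invented numbers, nothing taken from the paper):

    ```python
    import numpy as np

    def bic(max_log_like, n_params, n_data):
        """Bayesian Information Criterion: k*ln(n) - 2*ln(L_max)."""
        return n_params * np.log(n_data) - 2.0 * max_log_like

    # All numbers below are invented purely for illustration.
    # Joint fit: one shared set of cosmological parameters for all systems.
    bic_joint = bic(max_log_like=-512.3, n_params=6, n_data=400)

    # Independent fits: each system carries its own copy of the parameters,
    # so the fit is slightly better but the parameter penalty is much larger.
    bic_indep = bic(max_log_like=-508.9, n_params=12, n_data=400)

    # Kass & Raftery-style approximation: ln(Bayes factor) ~ -Delta(BIC)/2,
    # expressed here in favour of the joint (shared-parameter) model.
    delta_bic = bic_indep - bic_joint
    bayes_factor = np.exp(0.5 * delta_bic)
    print(f"Delta BIC = {delta_bic:.1f}, approximate Bayes factor = {bayes_factor:.2e}")
    ```

    Even though the independent fit achieves a slightly better likelihood, the parameter penalty dominates, so the comparison leans heavily towards the joint model, which is the sense in which the set-up is geared towards not flagging systematics.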

    All that to say, I’m not dumping on the whole idea of calculating between-dataset or between-experiment Bayes factors as some measure of discrepancy (though certainly not an absolute or perfect one). But I think this is nuts as a test of systematics, which is really another way of saying model misspecification.

    PS. Do you know your colleagues in climate change (sea level history reconstruction)? They’re doing cool things with integrated Gaussian processes etc.

  4. […] week I posted about new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about […]

  5. […] using the cosmic microwave background and the Cepheid distance scale I discussed, for example, here. This is illustrated nicely by the following couple of […]

  6. […] The above Figure is taken from the paper I blogged about a few days ago here. […]

  7. Pierre Fleury Says:

    This paper actually has little to do with the H0 tension. What we did there was to re-analyze SN data using a lumpy “Swiss-cheese” model instead of a homogeneous FLRW model. The main difference is that light tends to be less focussed in the former model than in the latter. Since this is a lensing-like effect, it does not really affect the low-z behaviour of the luminosity-redshift relation, and hence the measurement of H0 which would result from it. The effect is mostly visible in Ωm. As you can see in the figure, the horizontal position of the contours does not change much when the model is changed.
    It must also be noted that this paper used a now-obsolete data set: SNLS 3 as published in Riess et al. 2011. These data apparently contained uncorrected systematics, which is why the measured value of Ωm was so low. Now that these systematics have been accounted for, the lumpy model is not necessary to reach agreement with the CMB data.

    • Pierre Fleury Says:

      Small-scale inhomogeneities of the type we considered in that 2013 paper (concentrated lumps within a vacuole) are not expected to affect the CMB power spectrum, at least not on the scales which are used to constrain the cosmological parameters. Hence the CMB-measured Ωm remains untouched. However, the SN-measured Ωm can be strongly affected.
      The main message of the paper was the following. In 2013, there was a tension in Ωm between Planck and SNe; we showed that re-interpreting the SN data using a lumpy model could resolve that tension. It turns out that it also alleviated the tension in H0, because of the specific degeneracy directions. Unfortunately, since the Ωm tension was actually due to systematics in the SN data which have now been corrected, that paper and its conclusions are no longer relevant.
      Regarding the last sentence of my previous comment: I was only talking about the Ωm tension, which is now solved. However, the H0 tension is undoubtedly still a major concern which, I think, cannot be addressed with small-scale inhomogeneities.

    • Pierre Fleury Says:

      Erratum: the SNLS data which I was talking about is actually Conley et al 2011 (https://arxiv.org/abs/1104.1443)

    • Pierre Fleury Says:

      Concerning the Conley et al. 2011 results, a summary of the issues is given in Sec. 6.4 of Betoule et al. 2014 (https://arxiv.org/abs/1401.4064). The main cause of the change in the measured value of Ωm (0.23 -> 0.3) was a re-calibration of the MegaCam zero-points in the g band.

      The question “what went wrong in the past?” is indeed fascinating, but not particularly rewarding. I suspect that this is why such studies are rare.

    • Pierre Fleury Says:

      In fact, a Swiss-cheese-like d(z) relation is still in good agreement with the data; see e.g. https://arxiv.org/abs/1710.02374. In that paper, Dhawan et al. used the Kantowski-Dyer-Roeder approximation, which is a good effective model for a Swiss-cheese Universe (see https://arxiv.org/abs/1402.3123).
      They find that f=0 (eta=0 in their article) is still compatible with the JLA sample.
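      For anyone who wants to play with this, here is a rough numerical sketch (my own toy script, not taken from any of the papers above) of the Dyer-Roeder distance-redshift relation in a flat ΛCDM background. The parameter alpha is the usual beam-smoothness parameter (I won't try to map it exactly onto the f/eta conventions discussed above): alpha = 1 recovers the standard FLRW angular-diameter distance and alpha = 0 is the empty-beam limit.

      ```python
      import numpy as np
      from scipy.integrate import solve_ivp

      Om, H0, c = 0.3, 70.0, 299792.458  # Omega_m, H0 [km/s/Mpc], c [km/s]

      def E(z):
          """Dimensionless Hubble rate for flat LambdaCDM."""
          return np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

      def dE_dz(z):
          return 1.5 * Om * (1 + z) ** 2 / E(z)

      def dyer_roeder(z, y, alpha):
          """D'' + (2/(1+z) + E'/E) D' + 1.5*alpha*Om*(1+z)/E^2 * D = 0."""
          D, Dp = y
          Dpp = -(2.0 / (1 + z) + dE_dz(z) / E(z)) * Dp \
                - 1.5 * alpha * Om * (1 + z) / E(z) ** 2 * D
          return [Dp, Dpp]

      def D_A(z_max, alpha):
          """Angular-diameter distance [Mpc] to z_max for beam smoothness alpha."""
          sol = solve_ivp(dyer_roeder, (0.0, z_max), [0.0, c / H0],
                          args=(alpha,), rtol=1e-8)
          return sol.y[0, -1]

      for z in (0.05, 0.5, 1.5):
          full, empty = D_A(z, 1.0), D_A(z, 0.0)
          print(f"z = {z}: D_A(alpha=1) = {full:7.1f} Mpc, "
                f"D_A(alpha=0) = {empty:7.1f} Mpc, ratio = {empty / full:.3f}")
      ```

      The difference between the two beams only builds up towards higher redshift; at very low z both reduce to cz/H0, which is why the inferred H0 is barely affected.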

  8. […] You can click on this to make it bigger. You will see that this approach gives a `high’ value of H0 ≈ 74.2, consistent with local stellar distances measures, rather than with the `cosmological’ value which comes in around H0 ≈ 67 or so. It’s also consistent with the value derived from other gravitational lens studies discussed here. […]

  9. […] scale (seen, for example, in the diagram below  taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while […]

Leave a comment