Hubble’s Constant – The Tension Mounts!

There’s a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. The abstract is here:

You can download a PDF of the full paper here.

You will see that these measurements, based on observations of time delays in multiply-imaged quasars that have been gravitationally lensed, give higher values of the Hubble constant than determinations from, e.g., the Planck experiment.
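To make the scaling behind this concrete: the observed time delay between images goes as the "time-delay distance" D_dt = (1 + z_l) D_l D_s / D_ls, which is inversely proportional to H0, so a measured delay plus a lens model pins down the Hubble constant. Here is a minimal toy sketch of that scaling (not the H0LiCOW pipeline); the redshifts and the matter density below are illustrative, not values from the paper:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, H0, Om=0.3):
    """Comoving distance in Mpc for flat LCDM, via trapezoid integration."""
    n = 1000
    zs = [i * z / n for i in range(n + 1)]
    f = [1.0 / math.sqrt(Om * (1 + zz) ** 3 + (1 - Om)) for zz in zs]
    integral = sum((f[i] + f[i + 1]) / 2 * (z / n) for i in range(n))
    return C_KM_S / H0 * integral

def time_delay_distance(z_l, z_s, H0, Om=0.3):
    """D_dt = (1 + z_l) D_l D_s / D_ls, angular-diameter distances, flat universe."""
    Dc_l = comoving_distance(z_l, H0, Om)
    Dc_s = comoving_distance(z_s, H0, Om)
    D_l = Dc_l / (1 + z_l)
    D_s = Dc_s / (1 + z_s)
    D_ls = (Dc_s - Dc_l) / (1 + z_s)  # valid only for spatial flatness
    return (1 + z_l) * D_l * D_s / D_ls

# D_dt scales exactly as 1/H0, so for a fixed lens model a shorter observed
# delay implies a higher inferred H0:
ratio = time_delay_distance(0.6, 1.8, H0=67.0) / time_delay_distance(0.6, 1.8, H0=74.0)
print(ratio)  # prints 74/67 ≈ 1.1045
```

This is why the time-delay method gives an H0 that is (almost) independent of the rest of the expansion history: the delay measures one distance combination, and that combination carries the 1/H0 prefactor.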

Here’s a nice summary of the tension in pictorial form:

And here are some nice pictures of the lensed quasars involved in the latest paper:


It’s interesting that these determinations seem more consistent with local distance-scale approaches than with global cosmological measurements but the possibility remains of some unknown systematic.

Time, methinks, to resurrect my long-running poll on this!

Please feel free to vote. At the risk of inciting Mr Hine to clog up my filter with further gibberish, you may also comment through the box below.


23 Responses to “Hubble’s Constant – The Tension Mounts!”

  1. We have to hand it to H0liCOW for coming up with the best acronym!

    • People talk about the tension, but has anyone shown that Fleury is wrong here?

    • Pierre Fleury Says:

      This paper actually has little to do with the H0 tension. What we did there was to re-analyze SN data using a lumpy “Swiss-cheese” model instead of a homogeneous FLRW model. The main difference is that light tends to be less focussed in the former model than in the latter. Since this is a lensing-like effect, it does not really affect the low-z behaviour of the luminosity-redshift relation, and hence the measurement of H0 which would result from it. The effect is mostly visible on Ωm. As you can see in the figure, the horizontal position of the contours is not changed much when the model is changed.
      It must also be noted that this paper used a now obsolete data set: SNLS 3 as published in Riess et al 2011. These data apparently contained uncorrected systematics, which is the reason why the measured value of Ωm was so low. Now that these systematics have been accounted for, the lumpy model is not necessary to reach agreement with the CMB data.

      • Interesting.

        It has long been known that a Swiss-cheese or otherwise inhomogeneous universe on small scales doesn’t significantly affect the “direct” measurement of the Hubble constant itself. The important point was that the CMB stuff measures combinations of parameters where the Hubble constant is to some extent degenerate with others, so if assuming complete homogeneity when in fact the universe has significant small-scale inhomogeneity leads to a wrong Omega, it could also lead to a wrong Hubble constant. At least that is what I thought was the gist of the paper.

        I’m curious about your last sentence above: do you now see no tension between CMB measurements and “local” measurements which give a higher value?

      • Pierre Fleury Says:

        Small-scale inhomogeneities of the type which we considered in that 2013 paper (concentrated lumps within a vacuole) are not expected to affect the CMB power spectrum; at least not on the scales which are used to constrain the cosmological parameters. Hence, the CMB-measured Ωm remains untouched. However, the SN-measured Ωm can be strongly affected.
        The main message of the paper was the following. In 2013, there was a tension on Ωm between Planck and SNe; we showed that re-interpreting SN data using a lumpy model could resolve that tension. It turns out that it also alleviated the tension on H0, because of the specific degeneracy directions. Unfortunately, since the Ωm-tension was actually due to systematics in the SN data which have now been corrected, that paper and its conclusions are not relevant anymore.
        Regarding the last sentence of my previous comment — I was only talking about the Ωm tension, which is now solved. However, the H0 tension is undoubtedly still a major concern which, I think, cannot be addressed with small-scale inhomogeneities.

      • Pierre Fleury Says:

        Erratum: the SNLS data which I was talking about is actually Conley et al 2011 (https://arxiv.org/abs/1104.1443)

      • Thanks for clearing things up. Yes, the effect was in the SN data, not the CMB—bad memory on my part.

        Is there a reference for the problems with the Conley et al. data?

        So whatever the tension is, local inhomogeneities can’t resolve it; I think that there is a consensus there.

        Not that long ago, the question was whether the Hubble constant was 100 or 50, with ten-per-cent errors claimed in both cases. It turns out that both camps were wrong. Has anyone done a systematic investigation of what, actually, went wrong?

        In the early days of gravitational-lensing statistics, there were claims that lambda was ruled out. That was wrong, as we now know. But why? I can’t find it now, but there was actually a paper investigating what went wrong (by some of the authors who had got it wrong).

      • Pierre Fleury Says:

        Concerning the Conley et al. 2011 results, a summary of the issues is given in Sec. 6.4 of Betoule et al. 2014 (https://arxiv.org/abs/1401.4064). The main cause of the change in the measured value of Ωm (0.23 -> 0.3) was a re-calibration of the MegaCam zero-points in the g band.

        The question “what went wrong in the past?” is indeed fascinating, but not particularly rewarding. I suspect that this is why such studies are rare.

      • I found the article about what went wrong in gravitational-lensing statistics. (In my own defence, I note that not everyone got it wrong. 🙂 )

      • The ADS page has links to the full text.

      • Phillip Helbig Says:

        “Unfortunately, since the Ωm-tension was actually due to systematics in the SN data which have now be corrected, that paper and its conclusions are not relevant anymore.”

        Could one turn it around and say that since there is now no tension, then the Swiss-cheese model for the mass distribution cannot be correct? So, not irrelevant, but a different conclusion?

      • Pierre Fleury Says:

        In fact, a Swiss-cheese-like d(z) relation is still in good agreement with the data; see e.g. https://arxiv.org/abs/1710.02374. In that paper, Dhawan et al. used the Kantowski-Dyer-Roeder approximation, which is a good effective model for a Swiss-cheese Universe (see https://arxiv.org/abs/1402.3123).
        They find that f=0 (eta=0 in their article) is still compatible with the JLA sample.
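For readers who want to see what the Kantowski-Dyer-Roeder approximation actually does, here is a sketch that integrates the Dyer-Roeder equation numerically. The smoothness parameter alpha (the eta/f above) is the fraction of matter in a smooth component: alpha = 1 recovers homogeneous FLRW, alpha = 0 is an empty beam that feels no Ricci focusing. The cosmological parameters below are illustrative, not fitted values:

```python
import math

def E(z, Om):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM."""
    return math.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def dr_distance(z_max, alpha, H0=70.0, Om=0.3, n=2000):
    """Angular-diameter distance (Mpc) from the Dyer-Roeder ODE
    D'' + (dlnE/dz + 2/(1+z)) D' + (3/2) alpha Om (1+z) D / E^2 = 0,
    with D(0) = 0, D'(0) = c/H0, integrated by RK4."""
    c = 299792.458
    h = z_max / n
    D, Dp = 0.0, c / H0

    def rhs(z, D, Dp):
        e2 = E(z, Om) ** 2
        dlnE = 1.5 * Om * (1 + z) ** 2 / e2  # d ln E / dz
        return -(dlnE + 2.0 / (1 + z)) * Dp - 1.5 * alpha * Om * (1 + z) / e2 * D

    for i in range(n):
        z = i * h
        k1d, k1p = Dp, rhs(z, D, Dp)
        k2d, k2p = Dp + h / 2 * k1p, rhs(z + h / 2, D + h / 2 * k1d, Dp + h / 2 * k1p)
        k3d, k3p = Dp + h / 2 * k2p, rhs(z + h / 2, D + h / 2 * k2d, Dp + h / 2 * k2p)
        k4d, k4p = Dp + h * k3p, rhs(z + h, D + h * k3d, Dp + h * k3p)
        D += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        Dp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return D

# Less focusing in the lumpy (alpha < 1) case means a larger distance to the
# same redshift than in homogeneous FLRW (alpha = 1):
print(dr_distance(1.0, alpha=0.0), dr_distance(1.0, alpha=1.0))
```

With alpha = 1 this ODE reproduces the standard flat-FLRW angular-diameter distance, which is the sense in which KDR is an "effective" Swiss-cheese model: all the inhomogeneity is packed into one beam-smoothness parameter.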

  2. Why not go to a conference and debate it and possibly learn something about this? Alas, I already have plans to be lying on a Mediterranean beach during the conference.

    You can’t have everything. (As Lemmy said, where would you put it?)

  3. George Jones Says:

    Cosmic coincidence? Minutes after reading this blog post, a bot sent me something relevant. LIGO and standard sirens might have something to say about this.

    https://physicsworld.com/a/merging-neutron-stars-could-resolve-hubble-constant-crisis-sooner-than-previously-thought/

    https://www.nature.com/articles/s41550-019-0820-1

    https://arxiv.org/abs/1802.03404

  4. What do you think about the proposition of Section 4 that comparing the posteriors on the cosmological parameters resulting from fits to each quasar system separately, then jointly, provides a test of whether or not there are systematic errors in any one of these datasets?

    The details give me the heebie-jeebies: pairwise, the tests are based on the Bayes factor comparing the joint fit with the completely independent fit, which is a setup intrinsically geared in favour of rejecting the independent fit; ditto for the all-but-one checks. Also, the Bayes factors seem like they might be computed from the BIC (certainly it is used elsewhere, so we're talking about an order-1 approximation). If you had a suspicion that there might be systematics, would failing to reject the null hypothesis in this way give you comfort? Or would you try to be Bayesian and specify some models for what the systematic errors might look like in your measurements?

    All that to say, I’m not dumping on the whole idea of calculating between-dataset or between-experiment Bayes factors as some measure of discrepancy (though certainly not an absolute or perfect one). But I think this is nuts as a test of systematics, which is really another way of saying model misspecification.
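To make the BIC worry above concrete, here is a generic sketch of Schwarz's approximation, under which the Bayes factor between two fits is approximated as exp(-ΔBIC/2). This is an illustration of the general recipe, not the paper's actual calculation; all log-likelihoods, parameter counts, and sample sizes below are invented:

```python
import math

def bic(log_lmax, k, n):
    """Schwarz's Bayesian Information Criterion: -2 ln L_max + k ln n."""
    return -2.0 * log_lmax + k * math.log(n)

# Hypothetical numbers: one joint fit of six lens systems sharing a single
# set of four cosmological parameters, versus fitting each system with its
# own four parameters.
bic_joint = bic(log_lmax=-520.0, k=4, n=200)
bic_indep = bic(log_lmax=-515.0, k=4 * 6, n=200)  # 6 systems x 4 parameters

# BF > 1 favours the joint (no-systematics) fit. Even though the independent
# fit achieves a higher likelihood, its extra parameters pay a heavy ln(n)
# penalty -- the sense in which the test is geared towards "consistency".
bayes_factor = math.exp(-(bic_joint - bic_indep) / 2.0)
print(bayes_factor)
```

The point of the sketch: unless the independent fit improves the likelihood by far more than the Occam penalty, the BIC-approximated Bayes factor will favour the joint model, so "no tension detected" is close to the default outcome.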

    PS. Do you know your colleagues in climate change (sea level history reconstruction)? They’re doing cool things with integrated Gaussian processes etc.

  5. […] week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about […]

  6. […] using the cosmic microwave background and the Cepheid distance scale I discussed, for example, here. This is illustrated nicely by the following couple of […]

  7. […] The above Figure is taken from the paper I blogged about a few days ago here. […]

  8. […] You can click on this to make it bigger. You will see that this approach gives a ‘high’ value of H0 ≈ 74.2, consistent with local stellar distance measures, rather than with the ‘cosmological’ value, which comes in around H0 ≈ 67 or so. It’s also consistent with the value derived from other gravitational lens studies discussed here. […]
