The full paper (i.e. author list plus a small amount of text) can be found here. Here are two plots from that work.

The first shows the constraints from the six loudest gravitational wave events selected for the latest work, together with the two competing measurements from Planck and SH0ES:

As you can see, the individual measurements do not constrain very much. The second plot shows the effect of combining all relevant data, including a binary neutron star merger with an electromagnetic counterpart. The results are much stronger when the latter is included.

Obviously this measurement isn’t yet able to resolve the alleged tension between “high” and “low” values described on this blog passim, but it’s early days. If LIGO reaches its planned sensitivity the next observing run should provide many more events. A few hundred should get the width of the posterior distribution shown in the second figure down to a few percent, which would be very interesting indeed!

A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:

This paper investigates whether changes to late time physics can resolve the `Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.

For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.

One of the reactions to the alleged “tension” between the two measurements of H_{0} is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measurements of the cosmological distance scale (seen, for example, in the diagram below, taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using nearby sources, specifically stars of various types).

That is basically true. There is, however, another difference in the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.

Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. One does not therefore derive the global estimates in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.

With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?

Well, if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)^{2} where z is the redshift: D_{L}=D_{A}(1+z)^{2}.

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_{L}=D_{A}(1+z)^{2} is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?
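To make the relation concrete, here is a minimal numerical sketch (the parameter values H0 = 70 and Ω_m = 0.3 are illustrative, not taken from any of the papers discussed here) that computes the two distances in a flat ΛCDM model and verifies that their ratio is (1+z)^{2}:

```python
import math

C = 299792.458     # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s/Mpc (illustrative value)
OMEGA_M = 0.3      # matter density; flat model, so Omega_Lambda = 0.7

def hubble(z):
    """H(z) in km/s/Mpc for a flat Lambda-CDM model."""
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def comoving_distance(z, steps=10_000):
    """D_C(z) = c * integral_0^z dz'/H(z'), trapezoid rule, in Mpc."""
    h = z / steps
    total = 0.5 * (1 / hubble(0) + 1 / hubble(z))
    for i in range(1, steps):
        total += 1 / hubble(i * h)
    return C * total * h

z = 1.0
d_c = comoving_distance(z)
d_a = d_c / (1 + z)    # angular diameter distance
d_l = d_c * (1 + z)    # luminosity distance

# Etherington: D_L / D_A = (1+z)^2, whatever H0 and Omega_m are
print(f"D_A = {d_a:.1f} Mpc, D_L = {d_l:.1f} Mpc, ratio = {d_l / d_a:.3f}")
```

Note that the ratio comes out as exactly (1+z)^{2} = 4 at z = 1 regardless of the parameter choices; only the individual distances depend on H_{0} and the matter density.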

Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation that relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it only interacts with light rays via gravity). In the light of this, the fact that there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in expansion history is indeed a bit perplexing.

In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region, which may reconcile the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.

In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility.

Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory, i.e. just about anything without torsion in the metric, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer.

All of this begs the question of whether or not there is real tension in the H_{0} measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue:

It is of course interesting in itself to see the cut and thrust of scientific debate on a live topic such as this, but in my mind at least it raises interesting questions about the nature of scientific publication. To repeat something I wrote a while ago, it seems to me that the scientific paper published in an academic journal is an anachronism. Digital technology enables us to communicate ideas far more rapidly than in the past and allows much greater levels of interaction between researchers. I agree with Daniel Shanahan that the future for many fields will be defined not in terms of “papers” which purport to represent “final” research outcomes, but by living documents continuously updated in response to open scrutiny by the research community.

The Open Journal of Astrophysics is innovative in some ways but remains wedded to the paper as its fundamental object, and the platform is not able to facilitate interaction with readers. Of course one of the worries is that the comment facilities on many websites tend to get clogged up with mindless abuse, but I think that is manageable. I have some ideas on this, but for the time being I’m afraid all my energies are taken up with other things so this is for the future.

I’ve long argued that the modern academic publishing industry is not facilitating but hindering the communication of research. The arXiv has already made academic journals virtually redundant in many branches of physics and astronomy; other disciplines will inevitably follow. The age of the academic journal is drawing to a close, and it is consequently time to rethink the concept of a paper.

These are busy days in cosmological circles, especially regarding the Hubble Constant controversy. The latest contribution to appear on the arXiv is by George Efstathiou of Cambridge. Here is the abstract:

I don’t know if George has voted in my ongoing poll relating to this issue, but I bet that if he did he would vote low – along with the majority (so far):

Incidentally, I have seen no evidence of Russian interference in the voting.

Given yesterday’s news from the Atacama Cosmology Telescope, among other things suggesting a low value of the Hubble constant of around 67.6 km s^{-1} Mpc^{-1}, it might be fun to run another totally unscientific poll about which of the two Hubble constant camps has the most support in the community. The two camps are:

A `high’ value H_{0} ~ 73.5 ± 1.5 km s^{-1} Mpc^{-1} (as favoured by most stellar distance indicators, i.e. `local’ measurements).

A `low’ value H_{0} ~ 67.5 ± 0.5 km s^{-1} Mpc^{-1} (as favoured by most `cosmological’ estimates, e.g. cosmic microwave background fluctuations).

Of course you might also believe that both are wrong and the `true’ result lies outside both error regions, but I’d like to focus on these two possibilities, so the question is posed assuming that one of them is right: which one is it most likely to be? In your opinion. Humble or otherwise.
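For what it’s worth, a couple of lines of Python quantify the discrepancy between the two camps, using the values quoted above and adding the errors in quadrature (which assumes the uncertainties are independent and Gaussian):

```python
import math

# Values as quoted above for the two camps
h_high, sig_high = 73.5, 1.5   # 'local' distance-ladder value
h_low,  sig_low  = 67.5, 0.5   # 'cosmological' value

# Separation in units of the combined uncertainty
tension = abs(h_high - h_low) / math.hypot(sig_high, sig_low)
print(f"tension: {tension:.1f} sigma")  # about 3.8 sigma
```

That 3.8σ figure is, of course, only as meaningful as the quoted error bars: any unaccounted-for systematic in either camp would change it.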

There’s some excitement in cosmological circles with the announcement of new results from the Atacama Cosmology Telescope, which is situated in the Atacama Desert in Chile. The two papers describing the new results can be found on the arXiv here and here and the data set will be made available here (it is Data Release 4; or DR4 for short).

The structure in the above map is on arc-minute scales – exactly the sort of thing I was trying to simulate way back in the 1980s. If you want a laugh, here’s an ancient monochrome plot! The contours show 1σ, 2σ and 3σ fluctuations above the mean rather than the full distribution shown in the map above.

The full results will be discussed at a Zoom presentation at 11am Eastern Time (4pm Irish Time). I suspect it will be very busy so you will have to register in advance.

UPDATE: The Webinar is over but was recorded. I will post a link to the video when it is available. You can then guess which question was mine!

The new results from ACTPol are consistent with those from Planck, even down to the colour scheme used for the map, but the line taken by most media presentations I’ve seen (e.g. here and here) has been the issue of the Hubble Constant. The value of around 67.6 km s^{-1} Mpc^{-1} obtained by the Atacama Cosmology Telescope, though consistent with Planck measurements, is lower than most distance-scale measurements of H_{0}. The dichotomy between `low’ estimates from cosmological observations and `high’ values from local measurements persists.

This gives me an excuse to include my poll again:

There have been nearly a thousand responses so far, with opinion very divided.

The burning question however is when will face masks featuring the above map be made available for purchase? It could be a nice little earner…

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In the talk, Colin Hill explains how, even though early dark energy can alleviate the Hubble tension, it does so at the expense of increasing other tensions. Early dark energy can raise the predicted expansion rate inferred from the cosmic microwave background (CMB) by changing the sound horizon at the last scattering surface. However, the early dark energy also suppresses the growth of perturbations that are within the horizon while it is active. This means that, in order to fit the CMB power spectrum, the matter density must increase (and the spectral index becomes more blue-tilted) and the amplitude of the matter power spectrum must get bigger. In their paper, Colin and his coauthors show that this affects the weak lensing measurements by DES, KiDS and HSC, so that including those experiments in a full data analysis makes things discordant again. The Hubble parameter is pulled back down, restoring most of the tension between local and CMB measurements of H0, and the tension in S_8 is magnified by the increased mismatch between the predicted and measured matter power spectrum.
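The first step of that mechanism can be illustrated with a toy calculation. The sketch below is a crude stand-in for a real early-dark-energy model (not the one used by Hill et al.): a constant fraction f_ede of the total energy density switched on above a redshift z_c, with the baryon correction to the sound speed ignored. It shows how adding energy before recombination shrinks the comoving sound horizon. All parameter values are illustrative.

```python
import math

C = 299792.458          # speed of light, km/s
H0 = 67.5               # km/s/Mpc (illustrative)
OMEGA_M = 0.315         # matter density (illustrative)
OMEGA_R = 9.1e-5        # radiation density (illustrative)
Z_REC = 1090            # redshift of last scattering

def hubble(z, f_ede=0.0, z_c=3500.0):
    """H(z) in km/s/Mpc, with a crude early-dark-energy toy model:
    a fraction f_ede of the total energy density added at z > z_c."""
    h2 = H0**2 * (OMEGA_M * (1 + z)**3 + OMEGA_R * (1 + z)**4)
    if z > z_c:
        h2 /= (1.0 - f_ede)   # EDE makes up a fraction f_ede of the total
    return math.sqrt(h2)

def sound_horizon(f_ede=0.0, z_max=1e7, steps=100_000):
    """Comoving sound horizon r_s = integral_{z_rec}^{inf} c_s/H dz in
    Mpc, with c_s ~ c/sqrt(3) (baryon loading ignored for simplicity);
    integrated in log(1+z) with the trapezoid rule."""
    cs = C / math.sqrt(3.0)
    lo, hi = math.log(1 + Z_REC), math.log(1 + z_max)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        z = math.exp(lo + i * h) - 1.0
        w = 0.5 if i in (0, steps) else 1.0
        total += w * cs * (1 + z) / hubble(z, f_ede)  # dz = (1+z) dln(1+z)
    return total * h

r_std = sound_horizon()
r_ede = sound_horizon(f_ede=0.10)   # 10% EDE fraction before z_c
print(f"r_s = {r_std:.1f} Mpc without EDE, {r_ede:.1f} Mpc with EDE")
print(f"shrinkage: {100 * (1 - r_ede / r_std):.1f}%")
```

A smaller r_s at fixed acoustic angular scale θ* = r_s/D_A requires a smaller D_A, and hence a larger H_{0}, which is the sense in which early dark energy “raises the predicted expansion rate”. The rest of the story – the knock-on effects on perturbation growth and the matter power spectrum – is what this toy model leaves out.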

The overall moral of this story is that current cosmological models are so heavily constrained by the data that a relatively simple fix in one part of the model space tends to cause problems elsewhere. It’s a bit like one of those puzzles in which you have to arrange all the pieces in a magic square but every time you move one bit you mess up the others.

The paper that accompanies this talk can be found here.

And here’s my long-running poll about the Hubble tension:

In my office today for the first time in a couple of months I stumbled across a folder containing the notes from the summer school for new Astronomy PhD students I attended in Durham in 1985. Yes, that’s thirty-five years ago…

Among the lectures was a set given by Richard Ellis on Observational Cosmology from which I’ve taken this little snippet about the Hubble Constant:

It’s not only a trip down memory lane but also up the cosmological distance ladder! You will see that there were two main estimates, one low and one high. Both turned out to be about three sigma away from the currently-favoured value of around 70.

Plus ça change, plus c’est la même chose…

Does this change your mind about today’s tension between another pair of “low” (67) and “high” (73) values?

Here’s another example from the series of cosmology talks being curated by Shaun Hotchkiss. In this one, esteemed astronomer and Nobel Prize winner Adam Riess talks about what he and collaborators considered to be the leading candidate for a systematic error in the SH0ES measurement of the expansion rate of the Universe. This is “Cepheid crowding”, the possibility that background sources change our interpretation of Cepheid brightness, ruining one step in the SH0ES distance ladder. Riess and collaborators devise a nice way to test whether the crowding is correctly accounted for and find that it is, so crowding cannot be the “explanation” of an error in the distance ladder measurement of H0. Riess also stresses that both the early and late universe measurements of H0 are now backed up by multiple different measurements. Accordingly, if the resolution isn’t fundamental physics, then no single systematic can entirely resolve the tension.

P. S. The paper that accompanies this talk can be found on the arXiv here.

To avoid talking any more about you-know-what I thought I would continue the ongoing Hubble constant theme. There is an interesting new paper on the arXiv (by Hill et al.) about the extent to which a modified form of dark energy might relieve the current apparent tension.

The abstract is:

You can click on this to make it bigger; you can also download the PDF here.

I think the conclusion is clear and it may or may not be related to a previous post of mine here about the implications of Etherington’s theorem.

Here’s my ongoing poll on the Hubble constant. Feel free to while away a few seconds of your time working from home casting a vote!
