I recently came across a comprehensive review article on the arXiv and thought some of my regular readers might find it interesting as a description of the current state of play in cosmology. The paper is called Challenges for ΛCDM: An update and is written by Leandros Perivolaropoulos and Foteini Skara.

Here is the abstract:

A number of challenges of the standard ΛCDM model has been emerging during the past few years as the accuracy of cosmological observations improves. In this review we discuss in a unified manner many existing signals in cosmological and astrophysical data that appear to be in some tension (2σ or larger) with the standard ΛCDM model as defined by the Planck18 parameter values. In addition to the major well studied 5σ challenge of ΛCDM (the Hubble H0 crisis) and other well known tensions (the growth tension and the lensing amplitude AL anomaly), we discuss a wide range of other less discussed less-standard signals which appear at a lower statistical significance level than the H0 tension (also known as ‘curiosities’ in the data) which may also constitute hints towards new physics. For example such signals include cosmic dipoles (the fine structure constant α, velocity and quasar dipoles), CMB asymmetries, BAO Lyα tension, age of the Universe issues, the Lithium problem, small scale curiosities like the core-cusp and missing satellite problems, quasars Hubble diagram, oscillating short range gravity signals etc. The goal of this pedagogical review is to collectively present the current status of these signals and their level of significance, with emphasis to the Hubble crisis and refer to recent resources where more details can be found for each signal. We also briefly discuss possible theoretical approaches that can potentially explain the non-standard nature of some of these signals.

Among the useful things in it you will find this summary of the current ‘tension’ over the Hubble constant that I’ve posted about numerous times (e.g. here):

I was idly wondering earlier this week when the annual list of new Fellows elected to the Royal Society would be published, as it is normally around this time of year. Today it finally emerged and can be found here.

I am particularly delighted to see that my erstwhile Cardiff colleague Bernard Schutz (with whom I worked in the Data Innovation Research Institute and the School of Physics & Astronomy) is now an FRS! In fact I have known Bernard for quite a long time – he chaired the Panel that awarded me an SERC Advanced Fellowship in the days before STFC, and even before PPARC, way back in 1993. It just goes to show that even the most eminent scientists do occasionally make mistakes…

Anyway, hearty congratulations to Bernard, whose elevation to the Royal Society follows the award, a couple of years ago, of the Eddington Medal of the Royal Astronomical Society about which I blogged here. The announcement from the Royal Society is rather brief:

Bernard Schutz is honoured for his work driving the field of gravitational wave searches, leading to their direct detection in 2015.

I report here how gravitational wave observations can be used to determine the Hubble constant, H0. The nearly monochromatic gravitational waves emitted by the decaying orbit of an ultra–compact, two–neutron–star binary system just before the stars coalesce are very likely to be detected by the kilometre–sized interferometric gravitational wave antennas now being designed. The signal is easily identified and contains enough information to determine the absolute distance to the binary, independently of any assumptions about the masses of the stars. Ten events out to 100 Mpc may suffice to measure the Hubble constant to 3% accuracy.

In this paper, Bernard points out that a binary coalescence — such as the merger of two neutron stars — is a self-calibrating `standard candle’, which means that it is possible to infer the distance directly without using the cosmic distance ladder. The key insight is that the rate at which the binary’s frequency changes is directly related to the intrinsic amplitude of the gravitational waves it produces, i.e. how `loud’ the GW signal really is. Just as the observed brightness of a star depends on both its intrinsic luminosity and how far away it is, the strength of the gravitational waves received at LIGO depends on both the intrinsic loudness of the source and how far away it is. By observing the waves with detectors like LIGO and Virgo, we can therefore determine both the intrinsic loudness of the gravitational waves and their apparent loudness at the Earth, which allows us to determine the distance to the source directly.
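The self-calibration can be made concrete with the leading-order (quadrupole) formulae: the chirp rate fixes the chirp mass, and hence the intrinsic amplitude, so comparing that with the observed strain yields the distance. Here is a rough Python sketch using illustrative, GW170817-like numbers of my own choosing (this is not the actual LIGO/Virgo analysis, which matches full waveform templates):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # megaparsec, m

def chirp_mass(f, fdot):
    """Chirp mass (kg) from the GW frequency f (Hz) and its time
    derivative fdot (Hz/s), using the leading-order quadrupole formula."""
    return (c**3 / G) * (5.0 / 96.0 * math.pi**(-8.0/3.0)
                         * f**(-11.0/3.0) * fdot)**(3.0/5.0)

def strain_amplitude(mc, f, d):
    """Leading-order GW strain amplitude for chirp mass mc (kg),
    GW frequency f (Hz) and luminosity distance d (m)."""
    return (4.0 / d) * (G * mc / c**2)**(5.0/3.0) * (math.pi * f / c)**(2.0/3.0)

# Illustrative numbers, roughly GW170817-like (assumed, not measured values):
mc = 1.188 * M_SUN
f = 100.0   # Hz
# Chirp rate implied by that chirp mass at this frequency (leading order):
fdot = (96.0/5.0) * math.pi**(8.0/3.0) * (G*mc/c**3)**(5.0/3.0) * f**(11.0/3.0)

# The measured chirp (f, fdot) fixes the chirp mass...
mc_inferred = chirp_mass(f, fdot)
# ...and comparing the predicted to the observed amplitude fixes the distance:
h_obs = strain_amplitude(mc, f, 40.0 * MPC)          # "observed" strain
d_inferred = strain_amplitude(mc_inferred, f, 1.0) / h_obs   # metres
print(d_inferred / MPC)   # recovers ~40 Mpc
```

Note that this leading-order inversion ignores the detector antenna pattern and the binary’s inclination, which in practice dominate the distance uncertainty for a single event.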

It may have taken 31 years to get a measurement, but hopefully it won’t be long before there are enough detections to provide greater precision – and hopefully accuracy! – than the current methods can manage!

Here is a short video of Bernard himself talking about his work:

Once again, congratulations to Bernard on a very well deserved election to a Fellowship of the Royal Society.

The point is that if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances can differ only by a factor of (1+z)^{2} where z is the redshift: D_{L}=D_{A}(1+z)^{2}.

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_{L}=D_{A}(1+z)^{2} is an example of Etherington’s Reciprocity Theorem and it does not depend on a particular energy-momentum tensor; the redshift of a source just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between.
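To make the relation concrete, here is a minimal Python sketch (the parameter values are illustrative, Planck-like choices of my own) computing both distances in a flat ΛCDM model. In the FRW setting the ratio is (1+z)^{2} by construction of the distance definitions; Etherington’s theorem guarantees the same relation far more generally:

```python
from math import sqrt

C = 2.998e5            # speed of light, km/s
H0 = 67.5              # Hubble constant, km/s/Mpc (illustrative value)
OM, OL = 0.31, 0.69    # flat LCDM density parameters (assumed)

def hubble(z):
    """H(z) for a flat matter + Lambda model."""
    return H0 * sqrt(OM * (1 + z)**3 + OL)

def comoving_distance(z, n=10000):
    """Trapezoidal integral of c/H(z') from 0 to z, in Mpc."""
    dz = z / n
    return sum(0.5 * dz * (C / hubble(i * dz) + C / hubble((i + 1) * dz))
               for i in range(n))

z = 1.0
dc = comoving_distance(z)
d_a = dc / (1 + z)     # angular diameter distance (flat universe)
d_l = dc * (1 + z)     # luminosity distance
print(d_l / d_a)       # = (1+z)^2 = 4, by construction
```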

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons would violate the theorem. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility, as is having a space-time with torsion, i.e. a non-Riemannian space-time.

Another possibility you might think of is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness to provide a departure from the simple relationship given above. In fact an inhomogeneous cosmological model based on GR does not in itself violate Etherington’s theorem, but it means that the relation D_{L}=D_{A}(1+z)^{2} is no longer global. In such models there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer. In order to test this idea one would have to have luminosity distances and angular-diameter distances for each source. The most distant objects for which we have luminosity distance measures are supernovae, and we don’t usually have angular-diameter distances for them.

Anyway, these thoughts popped back into my head when I saw a new paper on the arXiv by Holanda et al, the abstract of which is here:

Here we have an example of a set of sources (galaxy clusters) for which we can estimate both luminosity and angular-diameter distances (the latter using gravitational lensing) and thus test the reciprocity relation (called the cosmic distance duality relation in the paper). The statistics aren’t great but the result is consistent with the standard theory, as are previous studies mentioned in the paper. So there’s no need yet to turn the Hubble tension into torsion!

The full paper (i.e. author list plus a small amount of text) can be found here. Here are two plots from that work.

The first shows the constraints from the six loudest gravitational wave events selected for the latest work, together with the two competing measurements from Planck and SH0ES:

As you can see, the individual measurements do not constrain very much. The second plot shows the effect of combining all relevant data, including a binary neutron star merger with an electromagnetic counterpart. The results are much stronger when the latter is included.

Obviously this measurement isn’t yet able to resolve the alleged tension between “high” and “low” values described on this blog passim, but it’s early days. If LIGO reaches its planned sensitivity the next observing run should provide many more events. A few hundred should get the width of the posterior distribution shown in the second figure down to a few percent, which would be very interesting indeed!
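The expected improvement is just the usual inverse-variance scaling for independent measurements: the fractional uncertainty falls as 1/√N. A back-of-envelope sketch, assuming (purely for illustration) a ~40% uncertainty per event without an electromagnetic counterpart:

```python
from math import sqrt

# Assumed single-event fractional uncertainty on H0 (illustrative only):
sigma_single = 0.40

# For N independent events, inverse-variance combination scales as 1/sqrt(N):
for n in (10, 100, 400):
    print(n, sigma_single / sqrt(n))
# A few hundred events bring a ~40% single-event error down to ~2%.
```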

A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:

This paper investigates whether changes to late time physics can resolve the `Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.

For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.

One of the reactions to the alleged “tension” between the two measurements of H_{0} is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measures of the cosmological distance scale (seen, for example, in the diagram below taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types).

That is basically true. There is, however, another difference in the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.

Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations, it is derived from the others. One does not therefore derive the global estimates in the same way as the local ones, so I’m simplifying things a lot in the following discussion which I am not therefore claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.

With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?

Well, if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)^{2} where z is the redshift: D_{L}=D_{A}(1+z)^{2}.

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_{L}=D_{A}(1+z)^{2} is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?

Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedman model. The redshift just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between. And because the evolution of the scale factor is determined by the Friedman equation that relates it to the energy contents of the Universe, changing the latter won’t help either no matter how exotic the stuff you introduce (as long as it only interacts with light rays via gravity). In the light of this, the fact there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in expansion history is indeed a bit perplexing.
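To spell this out, the redshift is simply the ratio of the scale factor at observation to that at emission:

```latex
1 + z \;=\; \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{em}}}
      \;=\; \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{em}})}
```

Whatever a(t) does at intermediate times cancels out of this ratio; only the endpoint values enter.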

In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region, which may reconcile the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.

In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is another way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility.

Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory, i.e. just about anything without torsion in the metric, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer.

All of this begs the question of whether or not there is real tension in the H_{0} measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue:

It is of course interesting in itself to see the cut and thrust of scientific debate on a live topic such as this, but in my mind at least it raises interesting questions about the nature of scientific publication. To repeat something I wrote a while ago, it seems to me that the scientific paper published in an academic journal is an anachronism. Digital technology enables us to communicate ideas far more rapidly than in the past and allows much greater levels of interaction between researchers. I agree with Daniel Shanahan that the future for many fields will be defined not in terms of “papers” which purport to represent “final” research outcomes, but by living documents continuously updated in response to open scrutiny by the research community.

The Open Journal of Astrophysics is innovative in some ways but remains wedded to the paper as its fundamental object, and the platform is not able to facilitate interaction with readers. Of course one of the worries is that the comment facilities on many websites tend to get clogged up with mindless abuse, but I think that is manageable. I have some ideas on this, but for the time being I’m afraid all my energies are taken up with other things so this is for the future.

I’ve long argued that the modern academic publishing industry is not facilitating but hindering the communication of research. The arXiv has already made academic journals virtually redundant in many branches of physics and astronomy; other disciplines will inevitably follow. The age of the academic journal is drawing to a close, and it is consequently time to rethink the concept of a paper.

These are busy days in cosmological circles, especially regarding the Hubble Constant controversy. The latest contribution to appear on the arXiv is by George Efstathiou of Cambridge. Here is the abstract:

I don’t know if George has voted in my ongoing poll relating to this issue, but I bet that if he did he would vote low – along with the majority (so far):

Incidentally, I have seen no evidence of Russian interference in the voting.

Given yesterday’s news from the Atacama Cosmology Telescope, among other things suggesting a low value of the Hubble constant of around 67.6 km s^{-1} Mpc^{-1}, it might be fun to run another totally unscientific poll about which of the two Hubble constant camps has the most support in the community. The two camps are:

A `high’ value H_{0} ~ 73.5 ± 1.5 km s^{-1} Mpc^{-1} (as favoured by most stellar distance indicators, i.e. `local’ measurements).

A `low’ value H_{0} ~ 67.5 ± 0.5 km s^{-1} Mpc^{-1} (as favoured by most `cosmological’ estimates, e.g. cosmic microwave background fluctuations).

Of course you might also believe that both are wrong and the `true’ result lies outside both error regions, but I’d like to focus on these two possibilities, so the question is posed assuming that one of them is right: which one is it most likely to be? In your opinion. Humble or otherwise.

There’s some excitement in cosmological circles with the announcement of new results from the Atacama Cosmology Telescope, which is situated in the Atacama Desert in Chile. The two papers describing the new results can be found on the arXiv here and here and the data set will be made available here (it is Data Release 4; or DR4 for short).

The structure in the above map is on arc-minute scales – exactly the sort of thing I was trying to simulate way back in the 1980s. If you want a laugh, here’s an ancient monochrome plot! The contours show 1σ, 2σ and 3σ fluctuations above the mean rather than the full distribution shown in the map above.

The full results will be discussed at a Zoom presentation at 11am Eastern Time (4pm Irish Time). I suspect it will be very busy so you will have to register in advance.

UPDATE: The Webinar is over but was recorded. I will post a link to the video when it is available. You can then guess which question was mine!

The new results from ACTPol are consistent with those from Planck, even down to the colour scheme used for the map, but the line taken by most media presentations I’ve seen (e.g. here and here) has been the issue of the Hubble Constant. The value of around 67.6 km s^{-1} Mpc^{-1} obtained by the Atacama Cosmology Telescope, though consistent with Planck measurements, is lower than most distance-scale measurements of H_{0}. The dichotomy between `low’ estimates from cosmological observations and `high’ values persists.

This gives me an excuse to include my poll again:

There have been nearly a thousand responses so far, with opinion very divided.

The burning question however is when will face masks featuring the above map be made available for purchase? It could be a nice little earner…

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In the talk, Colin Hill explains how, even though early dark energy can alleviate the Hubble tension, it does so at the expense of increasing other tensions. Early dark energy can raise the predicted expansion rate inferred from the cosmic microwave background (CMB) by changing the sound horizon at the last scattering surface. However, early dark energy also suppresses the growth of perturbations that are within the horizon while it is active. This means that, in order to fit the CMB power spectrum, the matter density must increase (and the spectral index become more blue-tilted), so the amplitude of the matter power spectrum gets bigger. In their paper, Colin and his coauthors show that this affects the weak lensing measurements by DES, KiDS and HSC, so that including those experiments in a full data analysis makes things discordant again. The Hubble parameter is pulled back down, restoring most of the tension between local and CMB measurements of H0, and the tension in S_8 is magnified by the increased mismatch between the predicted and measured matter power spectrum.
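The mechanism can be caricatured numerically. The sound horizon is an integral of the sound speed over the expansion history up to recombination, so any extra energy component that boosts H(z) around that epoch shrinks r_s; since the CMB pins down the angular acoustic scale θ* = r_s/D_M very precisely, a smaller r_s forces a smaller D_M and hence a larger inferred H0. Here is a toy Python sketch of my own (real early dark energy is a scalar field active near matter-radiation equality; I crudely model it as a radiation-like boost, and ignore baryon loading in the sound speed):

```python
from math import sqrt, log, exp

C = 2.998e5          # speed of light, km/s
H0 = 67.5            # km/s/Mpc (illustrative)
OM, OR = 0.31, 9e-5  # matter and radiation density parameters (assumed)
Z_STAR = 1090.0      # redshift of last scattering

def hubble(z, f_ede=0.0):
    """Toy H(z): flat model plus an extra early component, crudely
    modelled as a radiation-like boost (a stand-in for early dark energy)."""
    ol = 1.0 - OM - OR - f_ede * OR          # keep the model spatially flat
    return H0 * sqrt(OM*(1 + z)**3 + (1 + f_ede)*OR*(1 + z)**4 + ol)

def sound_horizon(f_ede=0.0, zmax=1e7, n=20000):
    """r_s = integral of c_s/H(z) dz from z_* upwards, with the crude
    approximation c_s = c/sqrt(3) (i.e. ignoring baryon loading)."""
    u0, u1 = log(1 + Z_STAR), log(1 + zmax)  # integrate in u = ln(1+z)
    du = (u1 - u0) / n
    total = 0.0
    for i in range(n + 1):
        z = exp(u0 + i * du) - 1.0
        w = 0.5 if i in (0, n) else 1.0      # trapezoidal weights
        total += w * (C / sqrt(3.0)) * (1 + z) / hubble(z, f_ede) * du
    return total  # comoving sound horizon, Mpc

# Boosting H(z) before recombination shrinks the sound horizon; with
# theta_* = r_s / D_M held fixed, D_M must shrink too, i.e. H0 rises.
print(sound_horizon(0.0), sound_horizon(0.2))
```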

The overall moral of this story is that current cosmological models are so heavily constrained by the data that a relatively simple fix in one part of the model space tends to cause problems elsewhere. It’s a bit like one of those puzzles in which you have to arrange all the pieces in a magic square but every time you move one bit you mess up the others.

The paper that accompanies this talk can be found here.

And here’s my long-running poll about the Hubble tension:
