## Testing Cosmological Reciprocity

Posted in The Universe and Stuff on April 13, 2021 by telescoper

I have posted a few times about Etherington’s Reciprocity Theorem in cosmology, largely in connection with the Hubble constant tension – see, e.g., here.

The point is that if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances can differ only by a factor of (1+z)², where z is the redshift: D_L = D_A(1+z)².

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem, and it does not depend on a particular energy-momentum tensor: the redshift of a source depends only on the scale factor when light is emitted and the scale factor when it is received, not on how it evolves in between.
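The relation is easy to check numerically in a Friedmann-Robertson-Walker model. The sketch below uses illustrative flat ΛCDM parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3, which are my assumptions, not from the discussion above) and shows that D_L and D_A, both built from the same comoving distance, satisfy the reciprocity relation identically:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LCDM parameters (assumed values for this sketch).
H0 = 70.0        # Hubble constant, km/s/Mpc
Om = 0.3         # matter density parameter
c = 299792.458   # speed of light, km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM."""
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * integral

def angular_diameter_distance(z):
    return comoving_distance(z) / (1 + z)

def luminosity_distance(z):
    return comoving_distance(z) * (1 + z)

for z in (0.5, 1.0, 2.0):
    DA = angular_diameter_distance(z)
    DL = luminosity_distance(z)
    ratio = DL / (DA * (1 + z)**2)
    print(f"z={z}: D_A={DA:.1f} Mpc, D_L={DL:.1f} Mpc, "
          f"D_L/[D_A(1+z)^2]={ratio:.6f}")
```

In an FRW model the ratio is unity by construction, whatever the density parameters are, which is the point of the theorem: the duality does not care about the expansion history.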

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons would violate the theorem. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding an exotic field that interacts with light in a peculiar way is one such possibility. Another is a space-time with torsion, i.e. a non-Riemannian space-time.

Another possibility you might think of is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness to provide a departure from the simple relationship given above. In fact an inhomogeneous cosmological model based on GR does not in itself violate Etherington’s theorem, but it does mean that the relation D_L = D_A(1+z)² is no longer global. In such models there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer. To test this idea one would need both luminosity distances and angular-diameter distances for each source. The most distant objects for which we have luminosity distance measures are supernovae, and we don’t usually have angular-diameter distances for them.

Anyway, these thoughts popped back into my head when I saw a new paper on the arXiv by Holanda et al., the abstract of which is here: Here we have an example of a set of sources (galaxy clusters) for which we can estimate both luminosity and angular-diameter distances (the latter using gravitational lensing) and thus test the reciprocity relation (called the cosmic distance duality relation in the paper). The statistics aren’t great, but the result is consistent with the standard theory, as are the previous studies mentioned in the paper. So there’s no need yet to turn the Hubble tension into torsion!
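Tests of this kind are usually phrased in terms of a duality parameter η(z) = D_L/[D_A(1+z)²], which should equal 1 if the theorem holds. The sketch below is a toy version of such a test on synthetic data that I generate to be consistent with η = 1 (the distances, redshifts, and 10% error bars are all made up for illustration; they are not the Holanda et al. measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# SYNTHETIC cluster sample, consistent with eta = 1 by construction.
# All numbers here are illustrative, not real data.
z = np.array([0.2, 0.35, 0.5, 0.7, 0.9])
DA_true = np.array([700.0, 1050.0, 1300.0, 1550.0, 1700.0])  # Mpc, made up
DL_true = DA_true * (1 + z)**2   # enforce exact duality in the mock truth

# Add 10% measurement scatter to mimic noisy lensing / luminosity distances.
DA_obs = DA_true * (1 + 0.10 * rng.standard_normal(z.size))
DL_obs = DL_true * (1 + 0.10 * rng.standard_normal(z.size))

# Per-object duality parameter and a crude error propagation.
eta = DL_obs / (DA_obs * (1 + z)**2)
sigma_eta = eta * np.sqrt(0.10**2 + 0.10**2)

# Inverse-variance weighted mean of eta and its uncertainty.
w = 1.0 / sigma_eta**2
eta_hat = np.sum(w * eta) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))
print(f"eta = {eta_hat:.3f} +/- {err:.3f}")
```

A result consistent with η = 1 within the errors is what "no violation of distance duality" means in papers of this type; with only a handful of objects and ~10% distances the constraint is weak, which is the sense in which "the statistics aren’t great".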

## Non-Solutions to the Hubble Constant Problem…

Posted in The Universe and Stuff on March 18, 2021 by telescoper

A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:

This paper investigates whether changes to late time physics can resolve the ‘Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.

For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.

One of the reactions to the alleged “tension” between the two measurements of H0 is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measures of the cosmological distance scale (seen, for example, in the diagram below, taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types). That is basically true. There is, however, another difference between the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically, they are angular-diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.

Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. One does not therefore derive the global estimates in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.

With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?

Well, if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)², where z is the redshift: D_L = D_A(1+z)².

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?

Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift depends only on the scale factor when light is emitted and the scale factor when it is received, not on how it evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation that relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it interacts with light rays only via gravity). In the light of this, the fact that there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in the expansion history is indeed a bit perplexing.
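The point that redshift depends only on the endpoints can be seen in a two-line toy calculation: since 1 + z = a(t_obs)/a(t_em), any two expansion histories sharing the same endpoint scale factors give identical redshifts, however differently they evolve in between (the numbers below are arbitrary, purely for illustration):

```python
import numpy as np

# 1 + z = a(t_obs) / a(t_em): the redshift is fixed by the endpoint
# scale factors alone. Two toy histories with the same endpoints but
# different evolution in between must give the same redshift.
a_em, a_obs = 0.5, 1.0                 # arbitrary endpoint scale factors
t = np.linspace(0.0, 1.0, 1001)        # arbitrary time coordinate

a_A = a_em + (a_obs - a_em) * t**2     # history A: accelerating growth
a_B = a_em + (a_obs - a_em) * t        # history B: linear growth

z_A = a_A[-1] / a_A[0] - 1
z_B = a_B[-1] / a_B[0] - 1
print(f"z (history A) = {z_A}, z (history B) = {z_B}")  # both 1.0
```

Both histories give z = 1 exactly, because a_obs/a_em = 2 in each case; nothing about the intermediate evolution enters the result.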

In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region, which may reconcile the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.

In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.
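One well-known effect of this lumpiness is worth a tiny numerical illustration. If lensing magnifies a source’s flux by a factor μ, the luminosity distance inferred from that flux is biased to D_L/√μ, so along any one line of sight the duality relation appears violated even though GR (and Etherington’s theorem, applied locally) still holds. The distance value below is just an illustrative number of my own, roughly of the order expected for a source at z ~ 1:

```python
import math

# Lensing magnification mu multiplies the observed flux: F_obs = mu * F.
# Since F ~ 1/D_L^2, the inferred luminosity distance is biased to
# D_L(true) / sqrt(mu): magnified sources (mu > 1) look too close,
# demagnified ones (mu < 1) look too far.
DL_true = 6600.0  # Mpc; illustrative value, roughly z ~ 1 in LCDM

for mu in (0.8, 1.0, 1.3):
    DL_inferred = DL_true / math.sqrt(mu)
    bias = DL_inferred / DL_true - 1
    print(f"mu={mu}: inferred D_L = {DL_inferred:.0f} Mpc ({bias:+.1%})")
```

Averaged over many lines of sight the magnifications largely cancel (flux conservation), which is one reason why lumpiness is hard to invoke as a systematic shift rather than just extra scatter.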

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility.

Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity-based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory, i.e. just about any space-time without torsion, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer.

All of this raises the question of whether or not there is real tension in the H0 measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue:

## Thoughts on Cosmological Distances

Posted in The Universe and Stuff on July 18, 2019 by telescoper

At the risk of giving the impression that I’m obsessed with the issue of the Hubble constant, I thought I’d do a quick post about something vaguely related to that which I happened to be thinking about the other night.

It has been remarked that the two allegedly discrepant sets of measures of the cosmological distance scale seen, for example, in the diagram below differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types). The above Figure is taken from the paper I blogged about a few days ago here.

That is basically true. There is, however, another difference between the two types of determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are luminosity distances) while the lower values are generally based on trigonometry (specifically, they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.

Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. One does not therefore derive the global estimates in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.

With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?

Well, if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)², where z is the redshift: D_L = D_A(1+z)².

I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:

The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?

Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift depends only on the scale factor when light is emitted and the scale factor when it is received, not on how it evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation that relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it interacts with light rays only via gravity).

In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region, which may reconcile the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.

In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.

Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility.

Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity-based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form.

Addendum: just to clarify one point, the reciprocity theorem applies to any GR-based metric theory, i.e. just about any space-time without torsion, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer.