Hubble Tension: an “Alternative” View?
There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.
Here is the abstract of the Vagnozzi paper:
I posted this picture last week which is relevant to the discussion:
The point is that if you allow the equation of state parameter w to vary from the value of w = -1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter then you have to assign a prior probability over the range of values that parameter could take. That prior penalty is carried through to the posterior probability. Unless the new model fits the observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.
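In symbols (this is just the standard Occam-factor book-keeping, nothing specific to the Vagnozzi paper): for data d, a model M with a free parameter w and prior \pi(w) has evidence

\[ Z = p(d \mid M) = \int \mathcal{L}(d \mid w)\,\pi(w)\,\mathrm{d}w , \]

so a prior spread over a wide range of w dilutes Z unless the improvement in the likelihood compensates, whereas a model with w fixed at a single value has no integral to dilute it.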
The Vagnozzi paper makes this point in the context of the Hubble tension. If a new floating parameter w is introduced, the data prefer a value less than -1 (as demonstrated in the figure), but on posterior probability grounds the resulting model is less probable than the standard cosmology, for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is adopted instead, the resulting model is not penalised by having to spread the prior probability out over a range of values but instead puts all its prior eggs in one basket labelled w = -1.3.
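To make the Ockham penalty concrete, here is a minimal numerical sketch. It uses a toy Gaussian likelihood in w with made-up numbers standing in for "the data prefer w < -1"; it is not the actual likelihood behind the figure. It compares the evidence for the standard model with w fixed at -1, a model with w free under a uniform prior, and a model with w fixed at -1.3 after peeking at where the likelihood peaks:

```python
import numpy as np

# Toy likelihood in w: Gaussian centred on w0 = -1.3 with width 0.15.
# These numbers are assumptions for illustration only, not real constraints.
w0, sigma = -1.3, 0.15

def likelihood(w):
    """Unnormalised toy likelihood for the equation-of-state parameter w."""
    return np.exp(-0.5 * ((w - w0) / sigma) ** 2)

# Models with w fixed: the evidence is just the likelihood at that value.
Z_fixed_lcdm = likelihood(-1.0)    # standard cosmology, w = -1
Z_fixed_m13 = likelihood(-1.3)     # the a-posteriori choice, w = -1.3

# Model with w free: the evidence marginalises the likelihood over a uniform
# prior on [-2, 0], which spreads the prior probability thinly.
w_grid = np.linspace(-2.0, 0.0, 2001)
dw = w_grid[1] - w_grid[0]
prior = 1.0 / 2.0                  # uniform prior density on an interval of width 2
Z_free = np.sum(likelihood(w_grid) * prior) * dw

print(f"Z(w = -1 fixed)   = {Z_fixed_lcdm:.3f}")
print(f"Z(w free)         = {Z_free:.3f}")
print(f"Z(w = -1.3 fixed) = {Z_fixed_m13:.3f}")
# With these toy numbers the free-w model barely beats w = -1 despite fitting
# better at its peak, while the "prior eggs in one basket" model wins easily.
```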
This argument is of course true. The problem is that the value of w = -1.3 does not derive from any ab initio principle of physics; it comes a posteriori, from the inference described above. It’s no surprise that you can get a better answer if you know what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score…
Indeed, many cosmologists think that values of w < -1 should be ruled out ab initio because they don’t make physical sense anyway.
July 25, 2019 at 1:54 pm
My thoughts exactly. Some relevant papers (one per comment to avoid moderation delays): https://arxiv.org/abs/0903.4210
July 25, 2019 at 1:54 pm
https://arxiv.org/abs/astro-ph/0401198
July 25, 2019 at 1:56 pm
https://arxiv.org/abs/astro-ph/0608184
July 25, 2019 at 1:56 pm
https://arxiv.org/abs/astro-ph/0701113
July 25, 2019 at 2:10 pm
Back in the old days, one had multiple contours and/or greyscales when making plots like this. Today we have more advanced graphics software. So what do people do? Plot only two contours and, instead of a greyscale, a constant fill level between contours (in colour, but this provides no more information than a fixed shade of grey). Why? The probability is, of course, not uniform between the contours, and it does not necessarily have the same gradient everywhere. Why throw away information, especially when artificially sharp boundaries distort the impression one should get from the data?
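For what it’s worth, a minimal matplotlib sketch of this point, using a made-up two-dimensional posterior rather than the actual data behind the figure above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up, banana-shaped 2D posterior for illustration only.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
post = np.exp(-0.5 * (x**2 + 2.0 * (y - 0.3 * x**2) ** 2))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Left: the practice being criticised -- two contour lines and flat colour fills.
levels = [np.exp(-2.0), np.exp(-0.5), 1.01]
ax1.contourf(x, y, post, levels=levels, colors=["lightblue", "steelblue"])
ax1.contour(x, y, post, levels=levels[:2], colors="k")
ax1.set_title("Two contours, flat fill")

# Right: many levels (or a greyscale), keeping the gradient of the posterior.
ax2.contourf(x, y, post, levels=20, cmap="Greys")
ax2.set_title("Many levels / greyscale")

plt.tight_layout()
plt.show()
```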
July 26, 2019 at 6:21 pm
Isn’t he doing something similar to profile likelihood, where nuisance parameters are maximised?
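For comparison, here is a minimal sketch of the distinction between profiling (maximising over w) and the marginalisation used in Bayesian model comparison (integrating over w with a prior), reusing the same toy Gaussian likelihood as in the example above; the numbers are assumptions for illustration only:

```python
import numpy as np

# Same toy Gaussian likelihood in w as in the example above (assumed numbers).
w0, sigma = -1.3, 0.15
w_grid = np.linspace(-2.0, 0.0, 2001)
like = np.exp(-0.5 * ((w_grid - w0) / sigma) ** 2)

# Profile likelihood: maximise over the parameter; insensitive to the prior width.
profile = like.max()

# Marginal likelihood (evidence): integrate against a uniform prior on [-2, 0];
# the broader the prior, the smaller this gets -- the Occam penalty.
dw = w_grid[1] - w_grid[0]
marginal = np.sum(like * (1.0 / 2.0)) * dw

print(f"profiled over w:     {profile:.3f}")
print(f"marginalised over w: {marginal:.3f}")
```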