Archive for Hubble constant

Who’s worried about the Hubble Constant?

Posted in The Universe and Stuff on January 11, 2018 by telescoper

One of the topics that is bubbling away on the back burner of cosmology is the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter.

Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Figure: table of Planck cosmological parameter estimates, with H0 listed among the derived parameters]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates the headline determination is H0 = (67.8 ± 0.9) km/s/Mpc.
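
To make the idea of a derived parameter a bit more concrete, here is a minimal sketch (in Python) of how posterior samples of base parameters get transformed into a posterior for H0. The Gaussian mock draws below are my own invention, standing in for real Planck chains; the point is only that, in a flat model, Ωm = 1 − ΩΛ, so draws of the physical matter density ωm = Ωmh² and of ΩΛ fix h = √(ωm/Ωm):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Mock posterior draws -- illustrative numbers only, not real Planck chains:
omega_m = rng.normal(0.1415, 0.0019, n)  # physical matter density, Omega_m * h^2
Omega_L = rng.normal(0.691, 0.006, n)    # dark-energy fraction

# Flatness fixes Omega_m = 1 - Omega_L, so each draw determines h:
h = np.sqrt(omega_m / (1.0 - Omega_L))
H0 = 100.0 * h                           # km/s/Mpc

print(f"H0 = {H0.mean():.1f} +/- {H0.std():.1f} km/s/Mpc (derived)")
```

The point is that H0 never appears in the fit itself; its value and error bar are inherited entirely from the base parameters.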

About 18 months ago I blogged about a “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data, which quoted a headline value of (73.24 ± 1.74) km/s/Mpc, hinting at a discrepancy somewhere around the 3 sigma level depending on precisely which determination you use. A news item on the BBC hot off the press reports that a more recent analysis by the same group is stubbornly sitting around the same value of the Hubble constant, with a slightly smaller error, so that the discrepancy is now about 3.4σ. On the other hand, the history of this type of study provides grounds for caution, because the systematic errors have often turned out to be much larger and more uncertain than the statistical errors…
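
For what it’s worth, the quoted significance is essentially just the difference between the two headline numbers in units of their combined error, assuming the two measurements are independent and Gaussian (itself a non-trivial assumption). A quick back-of-the-envelope check:

```python
import numpy as np

# Headline values quoted above (km/s/Mpc):
H0_ladder, sig_ladder = 73.24, 1.74  # Riess et al. distance ladder
H0_planck, sig_planck = 67.8, 0.9    # Planck (derived)

# Naive Gaussian tension: difference over combined uncertainty.
tension = (H0_ladder - H0_planck) / np.hypot(sig_ladder, sig_planck)
print(f"tension ~ {tension:.1f} sigma")  # about 2.8 sigma for these numbers
```

The tighter error bar on the newer Riess et al. analysis, and comparison against Planck combined with other data sets, is what pushes the figure up towards 3.4σ.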

Nevertheless, I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters.

If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which may be the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.
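
To see how the Ockham penalty works in practice, here is a toy sketch using the Bayesian Information Criterion as a crude stand-in for a full evidence calculation; the numbers are made up for illustration. An extra parameter has to improve χ² by more than a penalty that grows with the size of the data set:

```python
import numpy as np

def delta_bic(delta_chi2, extra_params, n_data):
    """BIC difference (extended minus base model).

    Positive values mean the Ockham penalty outweighs the improved fit;
    this is only a rough proxy for a full Bayesian evidence calculation.
    """
    return extra_params * np.log(n_data) - delta_chi2

# Made-up example: one extra parameter improving chi^2 by 5
# over ~2500 data points (CMB band-powers, say).
print(delta_bic(delta_chi2=5.0, extra_params=1, n_data=2500))  # ~ +2.8, penalised
```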

Anyway, since polls seem to be quite popular these days, let me resurrect this old one and see if opinions have changed!



Determining the Hubble Constant the Bernard Schutz way

Posted in The Universe and Stuff on October 19, 2017 by telescoper

In my short post about Monday’s announcement of the detection of a pair of coalescing neutron stars (GW170817), I mentioned that one of the results that caught my eye in particular was the paper about using such objects to determine the Hubble constant.

Here is the key result from that paper, i.e. the posterior distribution of the Hubble constant H0 given the data from GW170817:

You can also see the latest determinations from other methods, which appear to be in (slight) tension; you can read more about this here. Clearly the new result from GW170817 yields a fairly broad range for H0 but, as I said in my earlier post, it’s very impressive to be straddling the target with the first salvo.

Anyway, I just thought I’d mention here that the method of measuring the Hubble constant using coalescing binary neutron stars was invented by none other than Bernard Schutz of Cardiff University, who works in the Data Innovation Institute (as I do). The idea was first published in September 1986 in a Letter to Nature. Here is the first paragraph:

I report here how gravitational wave observations can be used to determine the Hubble constant, H0. The nearly monochromatic gravitational waves emitted by the decaying orbit of an ultra-compact, two-neutron-star binary system just before the stars coalesce are very likely to be detected by the kilometre-sized interferometric gravitational wave antennas now being designed [1–4]. The signal is easily identified and contains enough information to determine the absolute distance to the binary, independently of any assumptions about the masses of the stars. Ten events out to 100 Mpc may suffice to measure the Hubble constant to 3% accuracy.

In the paper, Bernard points out that a binary coalescence — such as the merger of two neutron stars — is a self-calibrating `standard candle’, which means that it is possible to infer the distance directly without using the cosmic distance ladder. The key insight is that the rate at which the binary’s frequency changes is directly related to the amplitude of the gravitational waves it produces, i.e. how `loud’ the GW signal is. Just as the observed brightness of a star depends on both its intrinsic luminosity and how far away it is, the strength of the gravitational waves received at LIGO depends on both the intrinsic loudness of the source and how far away it is. By observing the waves with detectors like LIGO and Virgo, we can determine both the intrinsic loudness of the gravitational waves and their loudness at the Earth, which allows us to determine the distance to the source directly.
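
As a toy illustration of the self-calibration, here is a sketch of the leading-order (circular-orbit, quadrupole) formulae: the chirp mass follows from the measured frequency f and its drift f-dot, and the distance then follows from the strain amplitude h. The input numbers below are loosely GW170817-like but invented for illustration, and the real analysis has to deal with detector response, source inclination and much else besides:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8           # SI units
M_sun, Mpc = 1.989e30, 3.086e22     # kg, metres

# --- An assumed source, used only to generate a mock "observation" ---
Mc_true = 1.19 * M_sun              # chirp mass (GW170817-like, illustrative)
d_true = 40.0 * Mpc                 # luminosity distance
f = 100.0                           # GW frequency at some instant (Hz)

# Leading-order frequency drift and (optimally oriented) strain amplitude:
fdot = (96 / 5) * np.pi**(8 / 3) * (G * Mc_true / c**3)**(5 / 3) * f**(11 / 3)
h = (4 / d_true) * (G * Mc_true / c**2)**(5 / 3) * (np.pi * f / c)**(2 / 3)

# --- The inversion an observer performs with (f, fdot, h) alone ---
Mc = (c**3 / G) * ((5 / 96) * np.pi**(-8 / 3) * f**(-11 / 3) * fdot)**(3 / 5)
d = (4 / h) * (G * Mc / c**2)**(5 / 3) * (np.pi * f / c)**(2 / 3)
print(f"chirp mass = {Mc / M_sun:.2f} M_sun, distance = {d / Mpc:.1f} Mpc")

# Add a redshift from an electromagnetic counterpart and, at low z, H0 = cz/d:
z = 0.0098                          # NGC 4993-like value, for illustration
print(f"H0 ~ {(c / 1000) * z / (d / Mpc):.0f} km/s/Mpc")
```

Note that no rung of the distance ladder appears anywhere: the waveform alone calibrates the source.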

It may have taken 31 years to get a measurement, but hopefully it won’t be long before there are enough detections to provide greater precision – and, with luck, accuracy! – than the current methods can manage!
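
The expected gain in precision is easy to estimate: for N broadly similar events the fractional error on H0 shrinks roughly like 1/√N. A quick Monte Carlo sketch, with a made-up 15% error per event (Schutz’s 3%-from-ten-events figure assumed tighter per-event errors):

```python
import numpy as np

rng = np.random.default_rng(1)
H0_true = 70.0            # km/s/Mpc, assumed for the simulation
frac_err = 0.15           # made-up 15% per-event H0 error

for n_events in (1, 10, 50, 100):
    # Each trial: average n_events independent standard-siren estimates.
    trials = rng.normal(H0_true, frac_err * H0_true, (10_000, n_events)).mean(axis=1)
    print(f"N={n_events:4d}: H0 known to ~{100 * trials.std() / H0_true:.1f}%")
```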

Above all, congratulations to Bernard for inventing a method which has now been shown to work very well!

Cosmology at a Crossroads – Poll

Posted in The Universe and Stuff on June 13, 2017 by telescoper

A short comment piece by Wendy Freedman has appeared in Nature Astronomy; there’s a free version on the arXiv here. It gives a nice perspective on the current debate about the value of the Hubble constant from the point of view of an expert on cosmological distance scale measurements.

The abstract is here:

We are at an interesting juncture in cosmology. With new methods and technology, the accuracy in measurement of the Hubble constant has vastly improved, but a recent tension has arisen that is either signaling new physics or as-yet unrecognized uncertainties.

For the record, I’d go for `as-yet unrecognized uncertainties’, primarily because this field has a long history of drastically underestimated error-bars!

However, the publication of this piece gives me the excuse to resurrect the following poll, in which I invite you to participate:

A New Measurement of the Expansion Rate of the Universe – Adam Riess

Posted in The Universe and Stuff on May 14, 2017 by telescoper

Here’s a nice talk by Nobel Laureate Adam Riess, delivered on May 11th at the Harvard-Smithsonian Center for Astrophysics and now available for you to watch at your leisure. It’s an hour long, but well worth watching if you’re interested in cosmology in general and in the apparent tension between different determinations of the Hubble constant in particular.

Here’s the description of the talk, which is introduced first by Bach and then by Daniel Eisenstein:

The Hubble constant remains one of the most important parameters in the cosmological model, setting the size and age scales of the Universe. Present uncertainties in the cosmological model including the nature of dark energy, the properties of neutrinos and the scale of departures from flat geometry can be constrained by measurements of the Hubble constant made to higher precision than was possible with the first generations of Hubble Telescope instruments. A streamlined distance ladder constructed from infrared observations of Cepheids and type Ia supernovae with ruthless attention paid to systematics now provide 2.4% precision and offer the means to do much better. By steadily improving the precision and accuracy of the Hubble constant, we now see evidence for significant deviations from the standard model, referred to as LambdaCDM, and thus the exciting chance, if true, of discovering new fundamental physics such as exotic dark energy, a new relativistic particle, or a small curvature to name a few possibilities. I will review recent and expected progress.

And here’s the talk in full.

After watching the video you may be interested in voting in my totally unscientific poll on the matter:

Tension in the Hubble constant

Posted in The Universe and Stuff on February 28, 2017 by telescoper

A few months ago I blogged about the apparent “tension” between different measurements of the Hubble constant. Here is an alternative view of the situation, with some recent updates. The plot has thickened a bit, but it’s still unclear to me whether there’s really a significant discrepancy.

Anyway, here’s a totally unscientific poll on the issue! Do feel free to register your vote.

Triton Station

There has been some hand-wringing of late about the tension between the value of the expansion rate of the universe – the famous Hubble constant, H0, measured directly from observed redshifts and distances – and that obtained by multi-parameter fits to the cosmic microwave background. Direct determinations consistently give values in the low to mid-70s, like Riess et al. (2016): H0 = 73.24 ± 1.74 km/s/Mpc, while the latest CMB fit from Planck gives H0 = 67.8 ± 0.9 km/s/Mpc. These are formally discrepant at a modest level: enough to be annoying, but not enough to be conclusive.

The widespread presumption is that there is a subtle systematic error somewhere. Who is to blame depends on what you work on. People who work on the CMB and appreciate its phenomenal sensitivity to cosmic geometry generally presume the problem is with galaxy measurements. To people who work on local galaxies, the CMB value is…


Should we worry about the Hubble Constant?

Posted in The Universe and Stuff on July 27, 2016 by telescoper

One of the topics that came up in the discussion sessions at the meeting I was at over the weekend was the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter. Coincidentally, I found this old preprint while tidying up my office yesterday:

[Figure: table of cosmological parameter determinations from a 1979 preprint]

Things have changed quite a bit since 1979! Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Figure: table of Planck cosmological parameter estimates, with H0 listed among the derived parameters]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates the headline determination is H0 = (67.8 ± 0.9) km/s/Mpc.

Note however that a recent “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data quotes a headline value of (73.24 ± 1.74) km/s/Mpc. Had these two values been obtained in 1979 we wouldn’t have worried, because the errors would have been much larger, but nowadays the measurements are much more precise and there does seem to be a hint of a discrepancy somewhere around the 3 sigma level, depending on precisely which determination you use. On the other hand, the history of Hubble constant determinations is one of results being quoted with very small “internal” errors that turned out to be dwarfed by systematic uncertainties.

I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters. If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which may be the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.

It’s quite interesting to think how much we scientists tend to carry on despite signs that things might be wrong. Take, for example, Newton’s Gravitational Constant, G. Measurements of this parameter are extremely difficult to do, but different experiments do seem to be in disagreement with each other. If Newtonian gravity turned out to be wrong that would indeed be extremely exciting, but I think it’s a wiser bet that there are uncontrolled experimental systematics. On the other hand, there is a danger that we might ignore evidence that there’s something fundamentally wrong with our theory. It’s sometimes a difficult judgment to decide how seriously to take experimental results.
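
One standard way to quantify “experiments in disagreement” is to form the weighted mean of the measurements and ask whether the scatter about it is consistent with the quoted errors. A sketch with made-up numbers (illustrative only, not real G determinations):

```python
import numpy as np

# Made-up measurements of a constant with quoted errors (not real G values),
# in units of 1e-11 m^3 kg^-1 s^-2:
values = np.array([6.6743, 6.6754, 6.6719, 6.6740])
errors = np.array([0.0002, 0.0003, 0.0002, 0.0004])

w = 1 / errors**2
mean = np.sum(w * values) / np.sum(w)
chi2 = np.sum(((values - mean) / errors) ** 2)
dof = len(values) - 1
print(f"weighted mean = {mean:.4f}, chi2/dof = {chi2 / dof:.1f}")
# chi2/dof >> 1 signals that the quoted errors cannot all be right.
```

A reduced χ² much bigger than unity is the statistical signature of underestimated systematic errors.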

Anyway, I don’t know what cosmologists think in general about this so there’s an excuse for a poll: