Archive for the Astrohype Category

A Spot of Hype

Posted in Astrohype, The Universe and Stuff on May 19, 2017 by telescoper

A few weeks ago a paper came out in Monthly Notices of the Royal Astronomical Society (accompanied by a press release from the Royal Astronomical Society) about a possible explanation for the now-famous cold spot in the cosmic microwave background sky that I’ve blogged about on a number of occasions:

If the standard model of cosmology is correct then a spot as cold and as large as this is quite a rare event, occurring only about 1% of the time in sky patterns simulated using the model assumptions. One possible explanation (which I’ve discussed before) is that this feature is generated not by density fluctuations in the primordial plasma (which are thought to cause the variation of temperature of the cosmic microwave background across the sky), but by something much more recent in the evolution of the Universe, namely a large local void in the matter distribution, which would produce a temperature decrement via the integrated Sachs-Wolfe effect.
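That ~1% figure is the kind of number one gets from Monte Carlo simulations: generate many random skies under the model assumptions and count how often a spot as cold as the observed one turns up. Here is a toy sketch of that logic, with white Gaussian noise standing in for a proper CMB simulation and a made-up "observed" depth, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_pix = 5000, 1000

# each toy "sky" is white Gaussian noise (in units of the pixel standard
# deviation); record the coldest pixel of each simulated sky
coldest = rng.standard_normal((n_sims, n_pix)).min(axis=1)

observed_depth = -4.0   # hypothetical cold-spot depth, for illustration only
p_mc = float(np.mean(coldest <= observed_depth))
print(f"fraction of simulated skies with a spot this cold: {p_mc:.3f}")
```

A real analysis would simulate correlated maps with the model's power spectrum and match the spot's angular size, but the logic — rarity as a fraction of simulated skies — is the same.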

The latest paper by Mackenzie et al. (which can be found on the arXiv here) pours enough cold water on that explanation to drown it completely and wash away the corpse. A detailed survey of the galaxy distribution in the direction of the cold spot shows no evidence for an under-density deep enough to affect the CMB. But if the cold spot is not caused by a supervoid, what is it caused by?

Right at the end of the paper the authors discuss a few alternatives,  some of them invoking `exotic’ physics early in the Universe’s history. One such possibility arises if we live in an inflationary Universe in which our observable universe is just one of a (perhaps infinite) collection of bubble-like domains which are now causally disconnected. If our bubble collided with another bubble early on then it might distort the cosmic microwave background in our bubble, in much the same way that a collision with another car might damage your car’s bodywork.

For the record I’ve always found this explanation completely implausible. A simple energy argument suggests that if such a collision were to occur between two inflationary bubbles, it is much more likely to involve their mutual destruction than a small dint. In other words, both cars would be written off.

Nevertheless, the press have seized on this possible explanation, got hold of the wrong end of the stick and proceeded to beat about the bush with it. See, for example, the Independent headline: `Mysterious ‘cold spot’ in space could be proof of a parallel universe, scientists say’.

No. Actually, scientists don’t say that. In particular, the authors of the paper don’t say it either. In fact they don’t mention `proof’ at all. It’s pure hype by the journalists. I don’t blame Mackenzie et al., nor the RAS press team. It’s just silly reporting.

Anyway, I’m sure I can hear you asking what I think is the origin of the cold spot. Well, the simple answer is that I don’t know for sure. The more complicated answer is that I strongly suspect that at least part of the explanation for why this patch of sky looks as cold as it does is tied up with another anomalous feature of the CMB, i.e. the hemispherical power asymmetry.

In the standard cosmological model the CMB fluctuations are statistically isotropic, which means the variance is the same everywhere on the sky. In observed maps of the microwave background, however, there is a slight but statistically significant variation of the variance, in such a way that the half of the sky that includes the cold spot has larger variance than the opposite half.
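The effect of such an asymmetry is easy to picture with a toy model: take a statistically isotropic Gaussian sky and modulate it with a dipole, so that one hemisphere ends up with larger variance than the other. The 7% amplitude below is illustrative (roughly the order of the reported low-multipole asymmetry), not a fitted value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# pixel positions on the sphere: cos(polar angle) is uniform on [-1, 1]
costheta = rng.uniform(-1.0, 1.0, n)
iso = rng.standard_normal(n)            # statistically isotropic fluctuations

amp = 0.07                              # illustrative modulation amplitude
sky = iso * (1.0 + amp * costheta)      # dipole-modulated sky

north = sky[costheta > 0]
south = sky[costheta < 0]
ratio = north.var() / south.var()
print(f"variance ratio (north/south): {ratio:.3f}")
```

A modulation of a few percent in amplitude produces a variance ratio of order 15% between the hemispheres, which is the sense in which a modest modulation can make a fluctuation in the high-variance half look more impressive than it really is.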

My suspicion is that the hemispherical power asymmetry is either an instrumental artifact (i.e. a systematic of the measurement) or is generated by improper subtraction of foreground signals (from our galaxy or even from within the Solar system). Whatever causes it, this effect could well modulate the CMB temperature in such a way that it makes the cold spot look more impressive than it actually is. It seems to me that the cold spot could be perfectly consistent with the standard model if this hemispherical anomaly is taken into account. This may not be `exotic’ or `exciting’ or feed the current fetish for the multiverse, but I think it’s the simplest and most probable explanation.

Call me old-fashioned.

P.S. You might like to read this article by Alfredo Carpineti which is similarly sceptical!

Declining Rotation Curves at High Redshift?

Posted in Astrohype, The Universe and Stuff on March 20, 2017 by telescoper

I was thinking of doing my own blog about a recent high-profile result published in Nature by Genzel et al. (and on the arXiv here), but then I see that Stacy McGaugh has already done a much more thorough and better-informed job than I would have done, so instead of trying to emulate his effort I’ll just direct you to his piece.

A recent paper in Nature by Genzel et al. reports declining rotation curves for high redshift galaxies. I have been getting a lot of questions about this result, which would be very important if true. So I thought I’d share a few thoughts here. Nature is a highly reputable journal – in most fields of […]

via Declining Rotation Curves at High Redshift? — Triton Station

P.S. Don’t ask me why WordPress can’t render the figures properly.

Fake News of the Holographic Universe

Posted in Astrohype, The Universe and Stuff on February 1, 2017 by telescoper

It has been a very busy day today, but I thought I’d grab a few minutes to rant about something inspired by a cosmological topic but which, I’m afraid, is symptomatic of a malaise that extends far wider than fundamental science.

The other day I found a news item with the title Study reveals substantial evidence of holographic universe. You can find a fairly detailed discussion of the holographic principle here, but the name is fairly self-explanatory: the familiar hologram is a two-dimensional object that contains enough information to reconstruct a three-dimensional object. The holographic principle extends this to the idea that information pertaining to a higher-dimensional space may reside on a lower-dimensional boundary of that space. It’s an idea which has gained some traction in the context of the black hole information paradox, for example.

There are people far more knowledgeable about the holographic principle than me, but naturally what grabbed my attention was the title of the news item: Study reveals substantial evidence of holographic universe. That got me really excited, as I wasn’t previously aware that there was any observed property of the Universe that showed any unambiguous evidence for the holographic interpretation, or indeed that models based on this principle could describe the available data better than the standard ΛCDM cosmological model. Naturally I went to the original paper on the arXiv by Niayesh Afshordi et al. to which the news item relates. Here is the abstract:

We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive to the standard ΛCDM model of cosmology. These models are based on three dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power-law used in ΛCDM, they still provide an excellent fit to data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles (i.e. l≲30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFT’s can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.

The third sentence (highlighted) states explicitly that, according to the Bayesian evidence (see here for a review of this), the holographic models do not fit the data even as well as the standard model (unless some of the CMB measurements are excluded, and even then they’re only slightly better).
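For readers unfamiliar with the Bayesian evidence: it is the likelihood averaged over a model's prior, so a model with extra flexibility automatically pays an Occam penalty unless the data really demand it. A toy sketch (data, priors and numbers all invented): compare "mean fixed at zero" against "mean free, with a unit Gaussian prior" on three unremarkable data points, and the simpler model wins even though the flexible one can fit at least as well:

```python
import numpy as np

def log_gauss(x, mu, sigma):
    # log of a normal density N(x; mu, sigma)
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

y = np.array([0.1, -0.2, 0.3])            # made-up "data"

# Model A: mu fixed at 0 -- no free parameters, so evidence = likelihood
logZ_A = log_gauss(y, 0.0, 1.0).sum()

# Model B: mu free with prior mu ~ N(0, 1); evidence marginalises over mu
mu = np.linspace(-10.0, 10.0, 20001)
dmu = mu[1] - mu[0]
log_like = log_gauss(y[:, None], mu[None, :], 1.0).sum(axis=0)
Z_B = np.sum(np.exp(log_like + log_gauss(mu, 0.0, 1.0))) * dmu

log_bayes_factor = logZ_A - np.log(Z_B)
print(f"ln(Bayes factor) in favour of the simpler model: {log_bayes_factor:.3f}")
```

The positive ln(Bayes factor) is the Occam penalty at work, and it is exactly this kind of global comparison that the abstract reports ΛCDM winning.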

I think the holographic principle is a very interesting idea and it may indeed at some point prove to provide a deeper understanding of our universe than our current models. Nevertheless it seems clear to me that the title of this news article is extremely misleading. Current observations do not really provide any evidence in favour of the holographic models, and certainly not “substantial evidence”.

The wider point should be obvious. We scientists rightly bemoan the era of “fake news”. We like to think that we occupy the high ground, by rigorously weighing up the evidence, drawing conclusions as objectively as possible, and reporting our findings with a balanced view of the uncertainties and caveats. That’s what we should be doing. Unless we do that we’re not communicating science but engaged in propaganda, and that’s a very dangerous game to play as it endangers the already fragile trust the public place in science.

The authors of the paper are not entirely to blame as they did not write the piece that kicked off this rant, which seems to have been produced by the press office at the University of Southampton, but they should not have consented to it being released with such a misleading title.

LIGO Echoes, P-values and the False Discovery Rate

Posted in Astrohype, Bad Statistics, The Universe and Stuff on December 12, 2016 by telescoper

Today is our staff Christmas lunch so I thought I’d get into the spirit by posting a grumbly article about a paper I found on the arXiv. In fact I came to this piece via a News item in Nature. Anyway, here is the abstract of the paper – which hasn’t been refereed yet:

In classical General Relativity (GR), an observer falling into an astrophysical black hole is not expected to experience anything dramatic as she crosses the event horizon. However, tentative resolutions to problems in quantum gravity, such as the cosmological constant problem, or the black hole information paradox, invoke significant departures from classicality in the vicinity of the horizon. It was recently pointed out that such near-horizon structures can lead to late-time echoes in the black hole merger gravitational wave signals that are otherwise indistinguishable from GR. We search for observational signatures of these echoes in the gravitational wave data released by advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), following the three black hole merger events GW150914, GW151226, and LVT151012. In particular, we look for repeating damped echoes with time-delays of 8MlogM (+spin corrections, in Planck units), corresponding to Planck-scale departures from GR near their respective horizons. Accounting for the “look elsewhere” effect due to uncertainty in the echo template, we find tentative evidence for Planck-scale structure near black hole horizons at 2.9σ significance level (corresponding to false detection probability of 1 in 270). Future data releases from LIGO collaboration, along with more physical echo templates, will definitively confirm (or rule out) this finding, providing possible empirical evidence for alternatives to classical black holes, such as in firewall or fuzzball paradigms.

I’ve highlighted some of the text in bold because, as written, it’s wrong.

I’ve blogged many times before about this type of thing. The “significance level” quoted corresponds to a “p-value” of 0.0037 (or about 1/270). If I had my way we’d ban p-values and significance levels altogether because they are so often presented in a misleading fashion, as it is here.
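For reference, the two quoted numbers are consistent with the two-sided tail probability of a standard normal distribution, which is the usual convention for converting a significance level to a p-value:

```python
import math

sigma = 2.9
# two-sided tail probability of a standard normal at 2.9 sigma
p = math.erfc(sigma / math.sqrt(2.0))
print(f"p = {p:.5f}, i.e. about 1 in {1.0 / p:.0f}")
```

This reproduces p ≈ 0.0037 ≈ 1/270; but, as explained below, that number is not the probability that the detection is false.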

What is wrong is that the significance level is not the same as the false detection probability.  While it is usually the case that the false detection probability (which is often called the false discovery rate) will decrease the lower your p-value is, these two quantities are not the same thing at all. Usually the false detection probability is much higher than the p-value. The physicist John Bahcall summed this up when he said, based on his experience, “about half of all 3σ  detections are false”. You can find a nice (and relatively simple) explanation of why this is the case here (which includes various references that are worth reading), but basically it’s because the p-value relates to the probability of seeing a signal at least as large as that observed under a null hypothesis (e.g.  detector noise) but says nothing directly about the probability of it being produced by an actual signal. To answer this latter question properly one really needs to use a Bayesian approach, but if you’re not keen on that I refer you to this (from David Colquhoun’s blog):

One problem with all of the approaches mentioned above was the need to guess at the prevalence of real effects (that’s what a Bayesian would call the prior probability). James Berger and colleagues (Sellke et al., 2001) have proposed a way round this problem by looking at all possible prior distributions and so coming up with a minimum false discovery rate that holds universally. The conclusions are much the same as before. If you claim to have found an effect whenever you observe a P value just less than 0.05, you will come to the wrong conclusion in at least 29% of the tests that you do. If, on the other hand, you use P = 0.001, you’ll be wrong in only 1.8% of cases.

Of course the actual false detection probability can be much higher than these limits, but they provide a useful rule of thumb.
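The 29% and 1.8% figures in the quoted passage come from the Sellke–Bayarri–Berger calibration: the Bayes factor in favour of the null hypothesis is bounded below by −e·p·ln(p) for p < 1/e, which (with 50:50 prior odds) translates into a minimum false discovery rate. A sketch reproducing those numbers, and applying the same bound to the paper's p ≈ 0.0037:

```python
import math

def min_fdr(p):
    """Sellke-Bayarri-Berger lower bound on the false discovery rate,
    assuming 50:50 prior odds between null and alternative (valid for p < 1/e)."""
    b = -math.e * p * math.log(p)    # minimum Bayes factor in favour of the null
    return b / (1.0 + b)

for p in (0.05, 0.001, 0.0037):
    print(f"p = {p}: minimum false discovery rate ~ {100.0 * min_fdr(p):.1f}%")
```

Even on this most optimistic calibration, a p-value of 0.0037 corresponds to a false discovery rate of at least about 5%, an order of magnitude larger than the "1 in 270" quoted in the abstract.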

To be fair the Nature item puts it more accurately:

The echoes could be a statistical fluke, and if random noise is behind the patterns, says Afshordi, then the chance of seeing such echoes is about 1 in 270, or 2.9 sigma. To be sure that they are not noise, such echoes will have to be spotted in future black-hole mergers. “The good thing is that new LIGO data with improved sensitivity will be coming in, so we should be able to confirm this or rule it out within the next two years.”

Unfortunately, however, the LIGO background noise is rather complicated so it’s not even clear to me that this calculation based on “random noise”  is meaningful anyway.

The idea that the authors are trying to test is of course interesting, but it needs a more rigorous approach before any evidence (even “tentative” evidence) can be claimed. This is rather reminiscent of the problems interpreting apparent “anomalies” in the Cosmic Microwave Background, which is something I’ve been interested in over the years.

In summary, I’m not convinced. Merry Christmas.



A Non-accelerating Universe?

Posted in Astrohype, The Universe and Stuff on October 26, 2016 by telescoper

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper, but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis but I’ve skimmed over what they’ve done and as far as I can tell it looks a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version)  in terms of the dark energy (y-axis) and matter (x-axis) density parameters:


Models lying along the line shown in this plane have the correct balance between Ωm and ΩΛ for the decelerating effect of the former to cancel the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours show “1, 2 and 3σ” regions, regarding all other parameters as nuisance parameters. It is true that the line of no acceleration does go inside the 3σ contour, so in that sense it is not entirely inconsistent with the data. On the other hand, the “best fit” (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.
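The geometry behind the plot is easy to make concrete. For a universe containing only matter and a cosmological constant (radiation neglected), the present-day deceleration parameter is q0 = Ωm/2 − ΩΛ, so the no-acceleration line is ΩΛ = Ωm/2, and the curvature parameter is Ωk = 1 − Ωm − ΩΛ. A quick check with the numbers quoted above:

```python
def q0(omega_m, omega_lambda):
    # deceleration parameter today (matter + Lambda only); q0 < 0 => accelerating
    return omega_m / 2.0 - omega_lambda

def omega_k(omega_m, omega_lambda):
    # spatial curvature parameter; 0 = flat, positive = open (negative curvature)
    return 1.0 - omega_m - omega_lambda

print(q0(0.341, 0.569))    # best fit: about -0.40, i.e. accelerating
print(q0(0.2, 0.1))        # a point on the no-acceleration line: exactly 0
print(omega_k(0.0, 0.0))   # Milne model: Omega_k = 1, far from the flat-CMB value
```

The last line is the point made below: the Milne model only survives in the supernova plane because curvature is treated as a nuisance parameter there, while the CMB pins it near zero.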

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is, on large scales, homogeneous and isotropic, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!

New: Top Ten Gaia Facts!

Posted in Astrohype, The Universe and Stuff on September 14, 2016 by telescoper

After today’s first release of data by the Gaia Mission, as a service to the community, for the edification of the public at large, and by popular demand, here is a list of Top Ten Gaia Facts.

Gaia looks nothing like the Herschel Space Observatory shown here.



  1. The correct pronunciation of GAIA is as in “gayer”. Please bear this in mind when reading any press articles about the mission.
  2. The GAIA spacecraft will orbit the Sun at the Second Lagrange Point, the only place in the Solar System where the  effects of cuts in the UK science budget can not be felt.
  3. The data processing challenges posed by GAIA are immense; the billions of astrometric measurements resulting from the mission will be analysed using the world’s biggest Excel Spreadsheet.
  4. To provide secure backup storage of the complete GAIA data set, the European Space Agency has commandeered the world’s entire stock of 3½ inch floppy disks.
  5. As well as measuring billions of star positions and velocities, GAIA is expected to discover thousands of new asteroids and the hiding place of Lord Lucan.
  6. GAIA can measure star positions to an accuracy of a few microarcseconds. That’s the angle subtended by a single pubic hair at a distance of 1000km.
  7. The precursor to GAIA was a satellite called Hipparcos, which is not how you spell Hipparchus.
  8. The BBC will be shortly be broadcasting a new 26-part TV series about GAIA. Entitled WOW! Gaia! That’s Soo Amaazing… it will be presented by Britain’s leading expert on astrometry, Professor Brian Cox.
  9. Er…
  10. That’s it.

From Sappho to Babbage

Posted in Astrohype, Poetry, The Universe and Stuff on May 24, 2016 by telescoper

The English mathematician Charles Babbage, who designed (though never completed) the first programmable calculating machine, wrote to the (then) young poet Tennyson, whose poem The Vision of Sin he had recently read:


I like to think Babbage was having a laugh with Tennyson here, rather than expressing a view that poetry should be taken so literally, but you never know…

Anyway, I was reminded of the above letter by the much-hyped recent story of the alleged astronomical “dating” of this ancient poem (actually just a fragment) by Sappho:

Tonight I’ve watched
the moon and then
the Pleiades
go down

The night is now
half-gone; youth
goes; I am

in bed alone

It is a trivial piece of astronomical work to deduce that, if the “Pleiades” does indeed refer to the star cluster and “the night is now half-gone” means sometime around midnight, then the scene described in the fragment happened, if it happened at all, between January and March. However, as an excellent rebuttal piece by Darin Hayton points out, the assumptions needed to arrive at a specific date are all questionable.

More important, poetry is not and never has been intended for such superficial interpretation.  That goes for modern works, but is even more true for ancient verse. Who knows what the imagery and allusions in the text would have meant to an audience when it was composed, over 2500 years ago, but which are lost on a modern reader?

I’m not so much saddened that someone thought to study the possible astronomical interpretation of an ancient text, even if they didn’t do a very thorough job of it. At least it means they are interested in poetry, although I doubt they were joking as Babbage may have been.

What does sadden me, however, is the ludicrous hype generated by the University of Texas publicity machine. There’s far too much of that about, and it’s getting worse.