Archive for the Astrohype Category

Hawking Points in the CMB Sky?

Posted in Astrohype, Bad Statistics, The Universe and Stuff on October 30, 2018 by telescoper

As I wait in Cardiff Airport for a flight back to civilization, I thought I’d briefly mention a paper that appeared on the arXiv this summer. The abstract of this paper (by Daniel An, Krzysztof A. Meissner and Roger Penrose) reads as follows:

This paper presents powerful observational evidence of anomalous individual points in the very early universe that appear to be sources of vast amounts of energy, revealed as specific signals found in the CMB sky. Though seemingly problematic for cosmic inflation, the existence of such anomalous points is an implication of conformal cyclic cosmology (CCC), as what could be the Hawking points of the theory, these being the effects of the final Hawking evaporation of supermassive black holes in the aeon prior to ours. Although of extremely low temperature at emission, in CCC this radiation is enormously concentrated by the conformal compression of the entire future of the black hole, resulting in a single point at the crossover into our current aeon, with the emission of vast numbers of particles, whose effects we appear to be seeing as the observed anomalous points. Remarkably, the B-mode location found by BICEP 2 is at one of these anomalous points.

The presence of Roger Penrose in the author list is no doubt a factor that contributed to the substantial amount of hype surrounding this paper. Although he is the originator of Conformal Cyclic Cosmology, however, I suspect he didn’t have much to do with the data analysis presented in the paper: great mathematician though he is, data analysis is not his forte.

I have to admit that I am very sceptical of the claims made in this paper – as I was in the previous case of claims of evidence in favour of the Penrose model. In that case the analysis was flawed because it did not properly calculate the probability of the claimed anomalies arising in the standard model of cosmology. Moreover, the reference to BICEP2 at the end of the abstract doesn’t strengthen the case: the detection claimed by BICEP2 was (a) in polarization, not temperature, and (b) is now known to be consistent with galactic foregrounds.

I will, however, hold my tongue on these claims, at least for the time being. I have an MSc student at Maynooth who is going to try to reproduce the analysis (which is not trivial, as the description in the paper is extremely vague). Watch this space.
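
In the meantime, it’s worth noting what the core of such a test has to look like. Here’s a minimal sketch of the null calculation (my own illustration, not the authors’ pipeline; the spectrum file name, resolution and smoothing scale are placeholders):

```python
import numpy as np
import healpy as hp

# Null test: under the standard (Gaussian, isotropic) model, how hot can
# the hottest smoothed spot on the sky get purely by chance?
cl = np.loadtxt("lcdm_tt_spectrum.txt")   # hypothetical C_ell file
nside, fwhm_deg = 128, 1.0                # placeholder resolution and scale

maxima = []
for _ in range(100):                      # use many more realizations in earnest
    sim = hp.synfast(cl, nside)
    sim_smooth = hp.smoothing(sim, fwhm=np.radians(fwhm_deg))
    maxima.append(sim_smooth.max())

# A spot in the data is only 'anomalous' if it beats this null distribution,
# for a statistic that was chosen before looking at the data.
print("99th percentile of the null distribution:", np.percentile(maxima, 99))
```

The devil is in the details this sketch ignores – masking, beam effects, noise, and above all defining the statistic a priori – which is precisely where the previous analysis fell down.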

EDGES and Foregrounds

Posted in Astrohype, The Universe and Stuff on September 3, 2018 by telescoper

Earlier this year I wrote a brief post about a paper by Bowman et al. from the EDGES experiment, which had just come out in Nature, reporting the detection of a flattened absorption profile in the sky-averaged radio spectrum, centred at a frequency of 78 megahertz and largely consistent with expectations for the 21-centimetre signal induced by early stars. It caused a lot of excitement at the time; see, e.g., here.
The key plot from the paper is this:

At the time I said that I wasn’t entirely convinced. Although the paper is very good at describing the EDGES experiment, it is far less convincing that all necessary foregrounds and systematics have been properly accounted for. There are many artefacts that could mimic the signal shown in the diagram.

I went on to say

If true, the signal is quite a lot larger in amplitude than standard models predict. That doesn’t mean that it must be wrong – I’ve never gone along with the saying `never trust an experimental result until it is confirmed by theory’ – but it’s way too early to claim that it proves that some new exotic physics is involved. The real explanation may be far more mundane.

There’s been a lot of media hype about this result – reminiscent of the BICEP bubble – and, while I agree that if it is true it is an extremely exciting result – I think it’s far too early to be certain of what it really represents. To my mind there’s a significant chance this could be a false cosmic dawn.

I gather the EDGES team is going to release its data publicly. That will be good, as independent checks of the data analysis would be very valuable.

Well, there’s a follow-up paper that I missed when it appeared on the arXiv in May, the abstract of which reads:

We have re-analyzed the data in which Bowman et al. (2018) identified a feature that could be due to cosmological 21-cm line absorption in the intergalactic medium at redshift z~17. If we use exactly their procedures then we find almost identical results, but the fits imply either non-physical properties for the ionosphere or unexpected structure in the spectrum of foreground emission (or both). Furthermore we find that making reasonable changes to the analysis process, e.g., altering the description of the foregrounds or changing the range of frequencies included in the analysis, gives markedly different results for the properties of the absorption profile. We can in fact get what appears to be a satisfactory fit to the data without any absorption feature if there is a periodic feature with an amplitude of ~0.05 K present in the data. We believe that this calls into question the interpretation of these data as an unambiguous detection of the cosmological 21-cm absorption signature.
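
The central point – that the recovered absorption profile depends sensitively on the assumed foreground model – is easy to illustrate in outline. Below is a minimal sketch of my own (a generic power-law-polynomial foreground and a plain Gaussian dip standing in for the flattened profile actually fitted; the data file name is hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

NU_REF = 75.0  # reference frequency in MHz

def make_model(n_fg):
    """Foreground with n_fg power-law terms plus a Gaussian absorption dip."""
    def model(nu, *p):
        fg = sum(c * (nu / NU_REF) ** (-2.5 + i) for i, c in enumerate(p[:n_fg]))
        amp, centre, width = p[n_fg:]
        return fg - amp * np.exp(-0.5 * ((nu - centre) / width) ** 2)
    return model

nu, t_sky = np.loadtxt("edges_spectrum.txt", unpack=True)  # hypothetical file

# Fit with two foreground complexities and see how the dip parameters move.
for n_fg in (4, 5):
    p0 = [1500.0] + [0.0] * (n_fg - 1) + [0.5, 78.0, 8.0]
    popt, _ = curve_fit(make_model(n_fg), nu, t_sky, p0=p0)
    amp, centre = popt[n_fg], popt[n_fg + 1]
    print(f"{n_fg} foreground terms -> {amp:.2f} K dip at {centre:.1f} MHz")
```

If the amplitude and centre shift markedly between the two fits – which is essentially what the re-analysis reports – then the `detection’ is telling you at least as much about the foreground parameterization as about the sky.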

You can read the full paper here (PDF). I haven’t kept up with this particular story, so further comments/updates/references are welcome through the box below!

The Dark Matter of Astronomy Hype

Posted in Astrohype, Bad Statistics, The Universe and Stuff on April 16, 2018 by telescoper

Just before Easter (and, perhaps more significantly, just before April Fool’s Day) a paper by van Dokkum et al. was published in Nature with the title A Galaxy Lacking Dark Matter. As is often the case with scientific publications presented in Nature, the press machine kicked into action and stories about this mysterious galaxy appeared in print and online all round the world.

So what was the result? Here’s the abstract of the Nature paper:

Studies of galaxy surveys in the context of the cold dark matter paradigm have shown that the mass of the dark matter halo and the total stellar mass are coupled through a function that varies smoothly with mass. Their average ratio M_halo/M_stars has a minimum of about 30 for galaxies with stellar masses near that of the Milky Way (approximately 5 × 10^10 solar masses) and increases both towards lower masses and towards higher masses. The scatter in this relation is not well known; it is generally thought to be less than a factor of two for massive galaxies but much larger for dwarf galaxies. Here we report the radial velocities of ten luminous globular-cluster-like objects in the ultra-diffuse galaxy NGC1052–DF2, which has a stellar mass of approximately 2 × 10^8 solar masses. We infer that its velocity dispersion is less than 10.5 kilometres per second with 90 per cent confidence, and we determine from this that its total mass within a radius of 7.6 kiloparsecs is less than 3.4 × 10^8 solar masses. This implies that the ratio M_halo/M_stars is of order unity (and consistent with zero), a factor of at least 400 lower than expected. NGC1052–DF2 demonstrates that dark matter is not always coupled with baryonic matter on galactic scales.

I had a quick look at the paper at the time and wasn’t very impressed by the quality of the data. To see why, look at the main plot, a histogram formed from just ten observations (of globular clusters used as velocity tracers):

I didn’t have time to read the paper thoroughly before the Easter weekend, but did draft a sceptical blog post about it, only to decide not to publish it as I thought it might be too inflammatory even by my standards! Suffice it to say that I was unconvinced.
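
For what it’s worth, a crude dynamical estimate M ~ σ²R/G (not the estimator used in the paper, and dropping the order-unity prefactor a proper estimator carries) already shows the quoted numbers are at least self-consistent:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

sigma = 10.5e3       # the 90% upper limit on the dispersion, m/s
radius = 7.6 * KPC   # the radius quoted in the abstract

mass = sigma**2 * radius / G / M_SUN
print(f"M < ~{mass:.1e} solar masses")  # ~2e8, same ballpark as the quoted 3.4e8
```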

Anyway, it turns out I was far from the only astrophysicist to have doubts about this result; you can find a nice summary of the discussion on social media here and here. Fortunately, people more expert than me have found the time to look in more detail at the van Dokkum et al. claim. There’s now a paper on the arXiv by Martin et al., whose abstract reads:

It was recently proposed that the globular cluster system of the very low surface-brightness galaxy NGC1052-DF2 is dynamically very cold, leading to the conclusion that this dwarf galaxy has little or no dark matter. Here, we show that a robust statistical measure of the velocity dispersion of the tracer globular clusters implies a mundane velocity dispersion and a poorly constrained mass-to-light ratio. Models that include the possibility that some of the tracers are field contaminants do not yield a more constraining inference. We derive only a weak constraint on the mass-to-light ratio of the system within the half-light radius or within the radius of the furthest tracer (M/L_V<8.1 at the 90-percent confidence level). Typical mass-to-light ratios measured for dwarf galaxies of the same stellar mass as NGC1052-DF2 are well within this limit. With this study, we emphasize the need to properly account for measurement uncertainties and to stay as close as possible to the data when determining dynamical masses from very small data sets of tracers.
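
The statistical issue is worth spelling out. With only ten tracers, each carrying a sizeable measurement error, the likelihood for the intrinsic dispersion is broad and asymmetric, and the result is sensitive to individual points. Here’s a minimal sketch of the standard calculation (with made-up velocities and errors, purely for illustration):

```python
import numpy as np

# Hypothetical radial velocities (km/s) and errors for ten tracers
v = np.array([1795., 1803., 1799., 1805., 1792., 1801., 1798., 1807., 1796., 1800.])
err = np.array([10., 8., 12., 9., 11., 10., 8., 13., 9., 10.])

# Model: v_i ~ N(vbar, sigma^2 + err_i^2); flat priors on vbar and sigma.
sigmas = np.linspace(0.1, 40.0, 400)
vbars = np.linspace(v.mean() - 30.0, v.mean() + 30.0, 200)
log_post = np.empty_like(sigmas)
for i, s in enumerate(sigmas):
    var = s**2 + err**2
    ll = -0.5 * (((v - vbars[:, None])**2 / var) + np.log(2 * np.pi * var)).sum(axis=1)
    log_post[i] = np.logaddexp.reduce(ll)   # marginalise over vbar

post = np.exp(log_post - log_post.max())
post /= np.trapz(post, sigmas)
cdf = np.cumsum(post) * (sigmas[1] - sigmas[0])
print(f"90% upper limit on sigma: {sigmas[np.searchsorted(cdf, 0.9)]:.1f} km/s")
```

Drop or perturb a single tracer in an exercise like this and the upper limit moves around considerably, which is exactly the fragility Martin et al. emphasize.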

More information about this system has been posted by Pieter van Dokkum on his website here.

Whatever the final analysis of NGC1052-DF2 turns out to show, it is undoubtedly an interesting system. It may indeed turn out to have less dark matter than expected, though I don’t think the evidence available right now warrants such a confident inference. What worries me most, however, is the way this result was presented in the media, with virtually no regard for the manifest statistical uncertainty inherent in the analysis. This kind of hype can be extremely damaging to science in general, and to explain why I’ll go off on a rant that I’ve indulged in a few times before on this blog.

A few years ago there was an interesting paper (in Nature, of all places), the opening paragraph of which reads:

The past few years have seen a slew of announcements of major discoveries in particle astrophysics and cosmology. The list includes faster-than-light neutrinos; dark-matter particles producing γ-rays; X-rays scattering off nuclei underground; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms; and in my view, that is the probable fate of the rest.

The piece went on to berate physicists for being too trigger-happy in claiming discoveries, the BICEP2 fiasco being a prime example. I agree that this is a problem, but it goes far beyond physics. In fact it’s endemic throughout science. A major cause of it is abuse of statistical reasoning.

Anyway, I thought I’d take the opportunity to re-iterate why statistics and statistical reasoning are so important to science. In fact, I think they lie at the very core of the scientific method, although I am still surprised how few practising scientists are comfortable with even basic statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance, it has to question itself. Getting this message wrong – whether by error or on purpose – is immensely dangerous.

Statistical reasoning also applies to many facets of everyday life, including business, commerce, transport, the media, and politics. Probability even plays a role in personal relationships, though mostly at a subconscious level. Science and technology, meanwhile, are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design.

Civilized societies face rigorous challenges in this century. We must confront the threat of climate change and forthcoming energy crises. We must find better ways of resolving conflicts peacefully lest nuclear or chemical or even conventional weapons lead us to global catastrophe. We must stop large-scale pollution or systematic destruction of the biosphere that nurtures us. And we must do all of these things without abandoning the many positive things that science has brought us. Abandoning science and rationality by retreating into religious or political fundamentalism would be a catastrophe for humanity.

Unfortunately, recent decades have seen a wholesale breakdown of trust between scientists and the public at large. This is due partly to the deliberate abuse of science for immoral purposes, and partly to the sheer carelessness with which various agencies have exploited scientific discoveries without proper evaluation of the risks involved. The abuse of statistical arguments has undoubtedly contributed to the suspicion with which many individuals view science.

There is an increasing alienation between scientists and the general public. Many fewer students enrol for courses in physics and chemistry than a few decades ago. Fewer graduates mean fewer qualified science teachers in schools. This is a vicious cycle that threatens our future. It must be broken.

The danger is that the decreasing level of understanding of science in society means that knowledge (as well as its consequent power) becomes concentrated in the minds of a few individuals. This could have dire consequences for the future of our democracy. Even as things stand now, very few Members of Parliament are scientifically literate. How can we expect to control the application of science when the necessary understanding rests with an unelected “priesthood” that is hardly understood by, or represented in, our democratic institutions?

Very few journalists or television producers know enough about science to report sensibly on the latest discoveries or controversies. As a result, important matters that the public needs to know about do not appear at all in the media, or if they do it is in such a garbled fashion that they do more harm than good.

Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.

Some scientists offer the oversimplified version at the outset, of course, and these are the ones that contribute to the image of scientists as priests. Such individuals often believe in their theories in exactly the same way that some people believe religiously. Not with the conditional and possibly temporary belief that characterizes the scientific method, but with the unquestioning fervour of an unthinking zealot. This approach may pay off for the individual in the short term, in popular esteem and media recognition – but when it goes wrong it is science as a whole that suffers. When a result that has been proclaimed certain is later shown to be false, the result is widespread disillusionment.

The worst example of this tendency that I can think of is the constant use of the phrase “Mind of God” by theoretical physicists to describe fundamental theories. This is not only meaningless but also damaging. As scientists we should know better than to use it. Our theories do not represent absolute truths: they are just the best we can do with the available data and the limited powers of the human mind. We believe in our theories, but only to the extent that we need to accept working hypotheses in order to make progress. Our approach is pragmatic rather than idealistic. We should be humble and avoid making extravagant claims that can’t be justified either theoretically or experimentally.

The more that people get used to the image of “scientist as priest” the more dissatisfied they are with real science. Most of the questions asked of scientists simply can’t be answered with “yes” or “no”. This leaves many with the impression that science is very vague and subjective. The public also tend to lose faith in science when it is unable to come up with quick answers. Science is a process, a way of looking at problems, not a list of ready-made answers to impossible problems. Of course it is sometimes vague, but I think it is vague in a rational way and that’s what makes it worthwhile. It is also the reason why science has led to so many objectively measurable advances in our understanding of the World.

I don’t have any easy answers to the question of how to cure this malaise, but I do have a few suggestions. It would be easy for a scientist such as myself to blame everything on the media and the education system, but in fact I think the responsibility lies mainly with ourselves. We are usually so obsessed with our own research, and with the need to publish specialist papers by the lorry-load in order to advance our careers, that we spend very little time explaining what we do to the public, or why.

I think every working scientist in the country should be required to spend at least 10% of their time working in schools or with the general media on “outreach”, including writing blogs like this. People in my field – astronomers and cosmologists – do this quite a lot, but these are areas where the public has some empathy with what we do. If only biologists, chemists, nuclear physicists and the rest were viewed in such a friendly light. Doing this sort of thing is not easy, especially when it comes to saying something on the radio that the interviewer does not want to hear. Media training for scientists has been a welcome recent innovation for some branches of science, but most of my colleagues have never had any help at all in this direction.

The second thing that must be done is to improve the dire state of science education in schools. Over the last two decades the national curriculum for British schools has been dumbed down to the point of absurdity. Pupils that leave school at 18 having taken “Advanced Level” physics do so with no useful knowledge of physics at all, even if they have obtained the highest grade. I do not at all blame the students for this; they can only do what they are asked to do. It’s all the fault of the educationalists, who have done the best they can for a long time to convince our young people that science is too hard for them. Science can be difficult, of course, and not everyone will be able to make a career out of it. But that doesn’t mean that it should not be taught properly to those that can take it in. If some students find it is not for them, then so be it. We don’t need everyone to be a scientist, but we do need many more people to understand how science really works.

I realise I must sound very gloomy about this, but I do think there are good prospects that the gap between science and society may gradually be healed. The fact that the public distrust scientists leads many of them to question us, which is a very good thing. They should question us and we should be prepared to answer them. If they ask us why, we should be prepared to give reasons. If enough scientists engage in this process then what will emerge is an understanding of the enduring value of science. I don’t just mean through the DVD players and computer games science has given us, but through its cultural impact. It is part of human nature to question our place in the Universe, so science is part of what we are. It gives us purpose. But it also shows us a way of living our lives. Except for a few individuals, the scientific community is tolerant, open, internationally-minded, and imbued with a philosophy of cooperation. It values reason and looks to the future rather than the past. Like anyone else, scientists will always make mistakes, but we can always learn from them. The logic of science may not be infallible, but it’s probably the best logic there is in a world so filled with uncertainty.

Cosmic Dawn?

Posted in Astrohype, The Universe and Stuff on March 2, 2018 by telescoper

I’m still in London hoping to get a train back to Cardiff at some point this morning – as I write they are running, but with a reduced service – so I thought I’d make a quick comment on a big piece of astrophysics news. There’s a paper out in this week’s Nature, the abstract of which is:

After stars formed in the early Universe, their ultraviolet light is expected, eventually, to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line. This alteration would cause the gas to absorb photons from the cosmic microwave background, producing a spectral distortion that should be observable today at radio frequencies of less than 200 megahertz [1]. Here we report the detection of a flattened absorption profile in the sky-averaged radio spectrum, which is centred at a frequency of 78 megahertz and has a best-fitting full-width at half-maximum of 19 megahertz and an amplitude of 0.5 kelvin. The profile is largely consistent with expectations for the 21-centimetre signal induced by early stars; however, the best-fitting amplitude of the profile is more than a factor of two greater than the largest predictions [2]. This discrepancy suggests that either the primordial gas was much colder than expected or the background radiation temperature was hotter than expected. Astrophysical phenomena (such as radiation from stars and stellar remnants) are unlikely to account for this discrepancy; of the proposed extensions to the standard model of cosmology and particle physics, only cooling of the gas as a result of interactions between dark matter and baryons seems to explain the observed amplitude [3]. The low-frequency edge of the observed profile indicates that stars existed and had produced a background of Lyman-α photons by 180 million years after the Big Bang. The high-frequency edge indicates that the gas was heated to above the radiation temperature less than 100 million years later.

The key plot from the paper is this:

I’ve read the paper and, as was the case with the BICEP2 announcement a few years ago, I’m not entirely convinced. I think the paper is very good at describing the EDGES experiment, but far less convincing that all necessary foregrounds and systematics have been properly accounted for. There are many artefacts that could mimic the signal shown in the diagram.

If true, the signal is quite a lot larger in amplitude than standard models predict. That doesn’t mean that it must be wrong – I’ve never gone along with the saying `never trust an experimental result until it is confirmed by theory’ – but it’s way too early to claim that it proves that some new exotic physics is involved. The real explanation may be far more mundane.

There’s been a lot of media hype about this result – reminiscent of the BICEP bubble – and, while I agree that if it is true it is an extremely exciting result – I think it’s far too early to be certain of what it really represents. To my mind there’s a significant chance this could be a false cosmic dawn.

I gather the EDGES team is going to release its data publicly. That will be good, as independent checks of the data analysis would be very valuable.

I’m sorry I haven’t got time for a more detailed post on this, but I have to get my stuff together and head for the train. Comments from experts and non-experts are, as usual, most welcome via the comments box.

A Spot of Hype

Posted in Astrohype, The Universe and Stuff on May 19, 2017 by telescoper

A few weeks ago a paper came out in Monthly Notices of the Royal Astronomical Society (accompanied by a press release from the Royal Astronomical Society) about a possible explanation for the now-famous cold spot in the cosmic microwave background sky, which I’ve blogged about on a number of occasions.

If the standard model of cosmology is correct then a spot as cold and as large as this is quite a rare event, occurring only about 1% of the time in sky patterns simulated using the model assumptions. One possible explanation of this (which I’ve discussed before) is that this feature is generated not by density fluctuations in the primordial plasma (which are thought to cause the variation of temperature of the cosmic microwave background across the sky), but by something much more recent in the evolution of the Universe, namely a large local void in the matter distribution, which would produce a temperature fluctuation through the Integrated Sachs-Wolfe effect.
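
Incidentally, the quoted 1% figure comes from exactly this sort of Monte Carlo: simulate many skies under the standard model and ask how often a disc that cold and that large turns up. In outline (a sketch of my own, with a placeholder spectrum file, disc size and threshold):

```python
import numpy as np
import healpy as hp

def coldest_disc(sky, nside, radius_deg=5.0):
    """Minimum disc-averaged temperature over the whole sky."""
    radius = np.radians(radius_deg)
    return min(
        sky[hp.query_disc(nside, hp.pix2vec(nside, ipix), radius)].mean()
        for ipix in range(hp.nside2npix(nside))
    )

cl = np.loadtxt("lcdm_tt_spectrum.txt")   # hypothetical C_ell file
nside = 32                                # coarse, just to keep this fast
observed = -100.0                         # placeholder for the measured value (uK)

n_sims = 200
hits = sum(coldest_disc(hp.synfast(cl, nside), nside) <= observed
           for _ in range(n_sims))
print("fraction of simulated skies with a colder spot:", hits / n_sims)
```

As always, the answer depends on how the statistic is defined – the size of the disc, the filtering, and whether the spot was selected before or after looking at the data.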

The latest paper by Mackenzie et al. (which can be found on the arXiv here) pours enough cold water on that explanation to drown it completely and wash away the corpse. A detailed survey of the galaxy distribution in the direction of the cold spot shows no evidence for an under-density deep enough to affect the CMB. But if the cold spot is not caused by a supervoid, what is it caused by?

Right at the end of the paper the authors discuss a few alternatives,  some of them invoking `exotic’ physics early in the Universe’s history. One such possibility arises if we live in an inflationary Universe in which our observable universe is just one of a (perhaps infinite) collection of bubble-like domains which are now causally disconnected. If our bubble collided with another bubble early on then it might distort the cosmic microwave background in our bubble, in much the same way that a collision with another car might damage your car’s bodywork.

For the record I’ve always found this explanation completely implausible. A simple energy argument suggests that if such a collision were to occur between two inflationary bubbles, it would be much more likely to involve their mutual destruction than a small dint. In other words, both cars would be written off.

Nevertheless, the press have seized on this possible explanation, got hold of the wrong end of the stick and proceeded to beat about the bush with it. See, for example, the Independent headline: `Mysterious ‘cold spot’ in space could be proof of a parallel universe, scientists say’.

No. Actually, scientists don’t say that. In particular, the authors of the paper don’t say it either. In fact they don’t mention `proof’ at all. It’s pure hype by the journalists. I don’t blame Mackenzie et al., nor the RAS press team. It’s just silly reporting.

Anyway, I’m sure I can hear you asking what I think is the origin of the cold spot. Well, the simple answer is that I don’t know for sure. The more complicated answer is that I strongly suspect that at least part of the explanation for why this patch of sky looks as cold as it does is tied up with another anomalous feature of the CMB, i.e. the hemispherical power asymmetry.

In the standard cosmological model the CMB fluctuations are statistically isotropic, which means the variance is the same everywhere on the sky. In observed maps of the microwave background, however, there is a slight but statistically significant variation of the variance, in such a way that the half of the sky that includes the cold spot has larger variance than the opposite half.
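
That variance asymmetry is straightforward to quantify: split the sky about some axis and compare the variance of the map in the two halves. A minimal sketch (the map file and axis are placeholders; real analyses scan over axis directions and mask the Galaxy):

```python
import numpy as np
import healpy as hp

def hemisphere_variances(sky, axis):
    """Variance of the map in the two hemispheres defined by 'axis'."""
    nside = hp.get_nside(sky)
    vecs = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))
    north = vecs.T @ axis > 0
    return sky[north].var(), sky[~north].var()

sky = hp.read_map("cmb_map.fits")        # hypothetical input map
axis = np.array([0.0, 0.0, 1.0])         # placeholder, not the measured direction
v_north, v_south = hemisphere_variances(sky, axis)
print("hemisphere variance ratio:", v_north / v_south)
```

In observed maps this ratio comes out slightly, but significantly, different from unity.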

My suspicion is that the hemispherical power asymmetry is either an instrumental artefact (i.e. a systematic of the measurement) or is generated by improper subtraction of foreground signals (from our galaxy or even from within the Solar system). Whatever causes it, this effect could well modulate the CMB temperature in such a way that it makes the cold spot look more impressive than it actually is. It seems to me that the cold spot could be perfectly consistent with the standard model if this hemispherical anomaly is taken into account. This may not be `exotic’ or `exciting’ or feed the current fetish for the multiverse, but I think it’s the simplest and most probable explanation.

Call me old-fashioned.

P.S. You might like to read this article by Alfredo Carpineti which is similarly sceptical!

Declining Rotation Curves at High Redshift?

Posted in Astrohype, The Universe and Stuff on March 20, 2017 by telescoper

I was thinking of writing my own blog post about a recent high-profile result published in Nature by Genzel et al. (and on the arXiv here), but then I saw that Stacy McGaugh has already done a much more thorough and better-informed job than I would have done, so instead of trying to emulate his effort I’ll just direct you to his piece.

A recent paper in Nature by Genzel et al. reports declining rotation curves for high redshift galaxies. I have been getting a lot of questions about this result, which would be very important if true. So I thought I’d share a few thoughts here. Nature is a highly reputable journal – in most fields of […]

via Declining Rotation Curves at High Redshift? — Triton Station

P.S. Don’t ask me why WordPress can’t render the figures properly.

Fake News of the Holographic Universe

Posted in Astrohype, The Universe and Stuff on February 1, 2017 by telescoper

It has been a very busy day today, but I thought I’d grab a few minutes to rant about something inspired by a cosmological topic but symptomatic, I’m afraid, of a malaise that extends far wider than fundamental science.

The other day I found a news item with the title Study reveals substantial evidence of holographic universe. You can find a fairly detailed discussion of the holographic principle here, but the name is fairly self-explanatory: the familiar hologram is a two-dimensional object that contains enough information to reconstruct a three-dimensional object. The holographic principle extends this to the idea that information pertaining to a higher-dimensional space may reside on a lower-dimensional boundary of that space. It’s an idea which has gained some traction in the context of the black hole information paradox, for example.

There are people far more knowledgeable about the holographic principle than me, but naturally what grabbed my attention was the title of the news item: Study reveals substantial evidence of holographic universe. That got me really excited, as I wasn’t previously aware that there was any observed property of the Universe that showed unambiguous evidence for the holographic interpretation, or indeed that models based on this principle could describe the available data better than the standard ΛCDM cosmological model. Naturally I went to the original paper on the arXiv by Niayesh Afshordi et al. to which the news item relates. Here is the abstract:

We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive to the standard ΛCDM model of cosmology. These models are based on three dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power-law used in ΛCDM, they still provide an excellent fit to data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles (i.e. l≲30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFT’s can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.

The third sentence of the abstract states explicitly that, according to the Bayesian evidence (see here for a review of this), the holographic models do not fit the data even as well as the standard model (unless some of the CMB measurements are excluded, and even then they’re only marginally better).
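
For readers unfamiliar with the machinery: model comparison via the Bayesian evidence boils down to the difference of log-evidences, which translates directly into betting odds. A trivial worked example (with made-up numbers, not values from the paper):

```python
import numpy as np

# Log-evidences for two models, as produced by e.g. a nested-sampling code
log_z_lcdm = -1000.0   # made-up value
log_z_holo = -1002.3   # made-up value

bayes_factor = np.exp(log_z_lcdm - log_z_holo)
print(f"odds in favour of LCDM: about {bayes_factor:.0f} to 1")  # ~10 to 1
```

`Substantial evidence’ for the holographic models would require this ratio to come out well below one; the paper reports the opposite.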

I think the holographic principle is a very interesting idea and it may indeed at some point provide a deeper understanding of our universe than our current models do. Nevertheless, it seems clear to me that the title of this news article is extremely misleading. Current observations do not really provide any evidence in favour of the holographic models, and certainly not “substantial evidence”.

The wider point should be obvious. We scientists rightly bemoan the era of “fake news”. We like to think that we occupy the high ground, by rigorously weighing up the evidence, drawing conclusions as objectively as possible, and reporting our findings with a balanced view of the uncertainties and caveats. That’s what we should be doing. Unless we do that we’re not communicating science but engaging in propaganda, and that’s a very dangerous game to play, as it endangers the already fragile trust the public place in science.

The authors of the paper are not entirely to blame, as they did not write the piece that kicked off this rant – it seems to have been produced by the press office at the University of Southampton – but they should not have consented to it being released with such a misleading title.