Archive for the Astrohype Category

Negative Mass, Phlogiston and the State of Modern Cosmology

Posted in Astrohype, The Universe and Stuff on December 7, 2018 by telescoper

A graphical representation of something or other.

I’ve noticed a modest amount of hype – much of it gibberish – going around about a paper published in Astronomy & Astrophysics, and available on the arXiv here, which suggests that material with negative mass might account for dark energy and/or dark matter. Here is the abstract of the paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

For a skeptical commentary on this work, see here.

The idea of negative mass is by no means new, of course. If you had asked a seventeenth-century scientist whether anything could have negative mass, the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials, and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight, implying an increase in the mass of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston – unless phlogiston has negative mass. Indeed, some serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, `levity’. Nowadays we would probably say `anti-gravity’.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the expansion of the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests that the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
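To see where the anti-gravity requirement comes from: in the Friedmann models the present-day deceleration parameter is q0 = ½ Σi Ωi (1 + 3wi), so acceleration (q0 < 0) requires a component with equation-of-state parameter w < -1/3, which a cosmological constant (w = -1) supplies. Here is a minimal sketch with illustrative round-number density parameters, not fitted values:

```python
# Sketch: why acceleration forces a component with w < -1/3 on us.
# The deceleration parameter today is q0 = 0.5 * sum_i Omega_i * (1 + 3 w_i);
# acceleration means q0 < 0. Density parameters are illustrative round numbers.

def q0(components):
    """components: list of (Omega, w) pairs."""
    return 0.5 * sum(omega * (1 + 3 * w) for omega, w in components)

matter_only = [(1.0, 0.0)]                  # pressureless matter, w = 0
with_lambda = [(0.3, 0.0), (0.7, -1.0)]     # matter plus cosmological constant

print(q0(matter_only))   # 0.5   -> decelerating
print(q0(with_lambda))   # -0.55 -> accelerating
```

No mixture of ordinary matter (w = 0) and radiation (w = 1/3) can ever make q0 negative, which is why some form of dark energy gets invoked.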

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe, as the paper that prompted this piece might be taken to suggest, the dark energy really is something like phlogiston. At least I prefer the name to quintessence. However, I think the author has missed a trick: to create a properly trendy cosmological theory he should have included the concept of supersymmetry, according to which there should be a fermionic counterpart of phlogiston called the phlogistino.


Hawking Points in the CMB Sky?

Posted in Astrohype, Bad Statistics, The Universe and Stuff on October 30, 2018 by telescoper

As I wait in Cardiff Airport for a flight back to civilization, I thought I’d briefly mention a paper that appeared on the arXiv this summer. The abstract of this paper (by Daniel An, Krzysztof A. Meissner and Roger Penrose) reads as follows:

This paper presents powerful observational evidence of anomalous individual points in the very early universe that appear to be sources of vast amounts of energy, revealed as specific signals found in the CMB sky. Though seemingly problematic for cosmic inflation, the existence of such anomalous points is an implication of conformal cyclic cosmology (CCC), as what could be the Hawking points of the theory, these being the effects of the final Hawking evaporation of supermassive black holes in the aeon prior to ours. Although of extremely low temperature at emission, in CCC this radiation is enormously concentrated by the conformal compression of the entire future of the black hole, resulting in a single point at the crossover into our current aeon, with the emission of vast numbers of particles, whose effects we appear to be seeing as the observed anomalous points. Remarkably, the B-mode location found by BICEP 2 is at one of these anomalous points.

The presence of Roger Penrose in the author list of this paper is no doubt a factor that contributed to the substantial amount of hype surrounding it, but although he is the originator of the Conformal Cyclic Cosmology I suspect he didn’t have anything to do with the data analysis presented in the paper as, great mathematician though he is, data analysis is not his forte.

I have to admit that I am very skeptical of the claims made in this paper – as I was in the previous case of claims of evidence in favour of the Penrose model. In that case the analysis was flawed because it did not properly calculate the probability of the claimed anomalies arising in the standard model of cosmology. Moreover, the addition of a reference to BICEP2 at the end of the abstract doesn’t strengthen the case: the detection claimed by BICEP2 was (a) in polarization, not in temperature, and (b) is now known to be consistent with galactic foregrounds.

I will, however, hold my tongue on these claims, at least for the time being. I have an MSc student at Maynooth who is going to try to reproduce the analysis (which is not trivial, as the description in the paper is extremely vague). Watch this space.

EDGES and Foregrounds

Posted in Astrohype, The Universe and Stuff on September 3, 2018 by telescoper

Earlier this year I wrote a brief post about a paper by Bowman et al. from the EDGES experiment that had just come out in Nature, reporting the detection of a flattened absorption profile in the sky-averaged radio spectrum, centred at a frequency of 78 megahertz, largely consistent with expectations for the 21-centimetre signal induced by early stars. It caused a lot of excitement at the time; see, e.g., here.
The key plot from the paper is this:

At the time I said that I wasn’t entirely convinced. Although the paper is very good at describing the EDGES experiment, it is far less convincing that all necessary foregrounds and systematics have been properly accounted for. There are many artefacts that could mimic the signal shown in the diagram.

I went on to say

If true, the signal is quite a lot larger in amplitude than standard models predict. That doesn’t mean that it must be wrong – I’ve never gone along with the saying `never trust an experimental result until it is confirmed by theory’ – but it’s way too early to claim that it proves that some new exotic physics is involved. The real explanation may be far more mundane.

There’s been a lot of media hype about this result – reminiscent of the BICEP bubble – and, while I agree that if it is true it is an extremely exciting result – I think it’s far too early to be certain of what it really represents. To my mind there’s a significant chance this could be a false cosmic dawn.

I gather the EDGES team is going to release its data publicly. That will be good, as independent checks of the data analysis would be very valuable.

Well, there’s a follow-up paper that I missed when it appeared on the arXiv in May, the abstract of which reads:

We have re-analyzed the data in which Bowman et al. (2018) identified a feature that could be due to cosmological 21-cm line absorption in the intergalactic medium at redshift z~17. If we use exactly their procedures then we find almost identical results, but the fits imply either non-physical properties for the ionosphere or unexpected structure in the spectrum of foreground emission (or both). Furthermore we find that making reasonable changes to the analysis process, e.g., altering the description of the foregrounds or changing the range of frequencies included in the analysis, gives markedly different results for the properties of the absorption profile. We can in fact get what appears to be a satisfactory fit to the data without any absorption feature if there is a periodic feature with an amplitude of ~0.05 K present in the data. We believe that this calls into question the interpretation of these data as an unambiguous detection of the cosmological 21-cm absorption signature.
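The degeneracy described in that abstract is easy to illustrate: a sufficiently flexible smooth foreground model can soak up much of a broad spectral dip, so what you recover for the feature depends on what you assume for the foreground. A toy sketch follows; the power-law foreground, the Gaussian stand-in for the dip, and all the numbers are illustrative, not the EDGES calibration or the actual models fitted in either paper:

```python
# Toy illustration of foreground/signal degeneracy in a sky-averaged spectrum:
# a smooth polynomial "foreground" of increasing order absorbs more and more
# of a broad absorption dip. Numbers are made up for illustration.
import numpy as np

nu = np.linspace(51.0, 99.0, 200)                      # frequency grid, MHz
foreground = 1750.0 * (nu / 75.0) ** -2.5              # smooth power-law foreground, K
dip = -0.5 * np.exp(-0.5 * ((nu - 78.0) / 8.0) ** 2)   # crude Gaussian stand-in for the feature, K
spectrum = foreground + dip

x = (nu - 75.0) / 24.0                                 # rescaled frequency for a well-conditioned fit
rms = {}
for order in (3, 5, 7):
    resid = spectrum - np.polyval(np.polyfit(x, spectrum, order), x)
    rms[order] = resid.std()
    print(order, rms[order])                           # rms residual shrinks as the order rises
```

The point is not that any particular order is right, but that the inferred dip is only as secure as the foreground model it sits on.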

You can read the full paper here (PDF). I haven’t kept up with this particular story, so further comments/updates/references are welcome through the box below!

The Dark Matter of Astronomy Hype

Posted in Astrohype, Bad Statistics, The Universe and Stuff on April 16, 2018 by telescoper

Just before Easter (and, perhaps more significantly, just before April Fool’s Day) a paper by van Dokkum et al. was published in Nature with the title A Galaxy Lacking Dark Matter. As is often the case with scientific publications presented in Nature, the press machine kicked into action and stories about this mysterious galaxy appeared in print and online all round the world.

So what was the result? Here’s the abstract of the Nature paper:


Studies of galaxy surveys in the context of the cold dark matter paradigm have shown that the mass of the dark matter halo and the total stellar mass are coupled through a function that varies smoothly with mass. Their average ratio Mhalo/Mstars has a minimum of about 30 for galaxies with stellar masses near that of the Milky Way (approximately 5 × 1010 solar masses) and increases both towards lower masses and towards higher masses. The scatter in this relation is not well known; it is generally thought to be less than a factor of two for massive galaxies but much larger for dwarf galaxies. Here we report the radial velocities of ten luminous globular-cluster-like objects in the ultra-diffuse galaxy NGC1052–DF2, which has a stellar mass of approximately 2 × 108 solar masses. We infer that its velocity dispersion is less than 10.5 kilometres per second with 90 per cent confidence, and we determine from this that its total mass within a radius of 7.6 kiloparsecs is less than 3.4 × 108 solar masses. This implies that the ratio Mhalo/Mstars is of order unity (and consistent with zero), a factor of at least 400 lower than expected. NGC1052–DF2 demonstrates that dark matter is not always coupled with baryonic matter on galactic scales.


I had a quick look at the paper at the time and wasn’t very impressed by the quality of the data. To see why, look at the main plot, a histogram formed from just ten observations (of globular clusters used as velocity tracers):

I didn’t have time to read the paper thoroughly before the Easter weekend, but did draft a sceptical blog post about it, only to decide not to publish it as I thought it might be too inflammatory even by my standards! Suffice it to say that I was unconvinced.

Anyway, it turns out I was far from the only astrophysicist to have doubts about this result; you can find a nice summary of the discussion on social media here and here. Fortunately, people more expert than me have found the time to look in more detail at the van Dokkum et al. claim. There’s now a paper on the arXiv by Martin et al., the abstract of which reads:

It was recently proposed that the globular cluster system of the very low surface-brightness galaxy NGC1052-DF2 is dynamically very cold, leading to the conclusion that this dwarf galaxy has little or no dark matter. Here, we show that a robust statistical measure of the velocity dispersion of the tracer globular clusters implies a mundane velocity dispersion and a poorly constrained mass-to-light ratio. Models that include the possibility that some of the tracers are field contaminants do not yield a more constraining inference. We derive only a weak constraint on the mass-to-light ratio of the system within the half-light radius or within the radius of the furthest tracer (M/L_V<8.1 at the 90-percent confidence level). Typical mass-to-light ratios measured for dwarf galaxies of the same stellar mass as NGC1052-DF2 are well within this limit. With this study, we emphasize the need to properly account for measurement uncertainties and to stay as close as possible to the data when determining dynamical masses from very small data sets of tracers.
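To get a feel for how little ten tracers constrain a velocity dispersion, here is a toy version of the kind of likelihood calculation involved: a Gaussian likelihood for the intrinsic dispersion, evaluated on a grid, with the per-object measurement errors folded in. The velocities and errors below are invented for illustration and are not the published measurements:

```python
# Toy inference of an intrinsic velocity dispersion from ten tracers.
# Model: each velocity ~ N(0, sigma^2 + err^2); compute the posterior for
# sigma on a grid (flat prior) and read off a 90% upper limit.
# Data values are made up for illustration.
import numpy as np

v = np.array([-5., 3., 1., -2., 7., 0., -4., 2., 6., -1.])   # km/s, toy data
err = np.full_like(v, 5.0)                                   # per-object errors, km/s

sigmas = np.linspace(0.5, 30.0, 300)
logL = np.array([
    -0.5 * np.sum(v**2 / (s**2 + err**2) + np.log(s**2 + err**2))
    for s in sigmas
])
post = np.exp(logL - logL.max())
post /= post.sum()

# 90% upper limit on the intrinsic dispersion:
upper90 = sigmas[np.searchsorted(np.cumsum(post), 0.9)]
print(upper90)
```

Even with perfectly well-behaved toy data, the posterior is broad: with ten points the upper limit moves around a lot if you change a single velocity, which is the small-sample fragility the re-analysis emphasises.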

More information about this system has been posted by Pieter van Dokkum on his website here.

Whatever the final analysis of NGC1052-DF2 turns out to show, it is undoubtedly an interesting system. It may indeed turn out to have less dark matter than expected, though I don’t think the evidence available right now warrants such an inference with such confidence. What worries me most, however, is the way this result was presented in the media, with virtually no regard for the manifest statistical uncertainty inherent in the analysis. This kind of hype can be extremely damaging to science in general, and to explain why I’ll go off on a rant that I’ve indulged in a few times before on this blog.

A few years ago there was an interesting paper (in Nature, of all places), the opening paragraph of which reads:

The past few years have seen a slew of announcements of major discoveries in particle astrophysics and cosmology. The list includes faster-than-light neutrinos; dark-matter particles producing γ-rays; X-rays scattering off nuclei underground; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms; and in my view, that is the probable fate of the rest.

The piece went on to berate physicists for being too trigger-happy in claiming discoveries, the BICEP2 fiasco being a prime example. I agree that this is a problem, but it goes far beyond physics. In fact it’s endemic throughout science. A major cause of it is the abuse of statistical reasoning.

Anyway, I thought I’d take the opportunity to re-iterate why statistics and statistical reasoning are so important to science. In fact, I think they lie at the very core of the scientific method, although I am still surprised at how few practising scientists are comfortable with even basic statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance, it has to question itself. Getting this message wrong – whether by error or on purpose – is immensely dangerous.

Statistical reasoning also applies to many facets of everyday life, including business, commerce, transport, the media, and politics. Probability even plays a role in personal relationships, though mostly at a subconscious level. It is a feature of everyday life that science and technology are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design.

Civilized societies face rigorous challenges in this century. We must confront the threat of climate change and forthcoming energy crises. We must find better ways of resolving conflicts peacefully lest nuclear or chemical or even conventional weapons lead us to global catastrophe. We must stop large-scale pollution or systematic destruction of the biosphere that nurtures us. And we must do all of these things without abandoning the many positive things that science has brought us. Abandoning science and rationality by retreating into religious or political fundamentalism would be a catastrophe for humanity.

Unfortunately, recent decades have seen a wholesale breakdown of trust between scientists and the public at large. This is due partly to the deliberate abuse of science for immoral purposes, and partly to the sheer carelessness with which various agencies have exploited scientific discoveries without proper evaluation of the risks involved. The abuse of statistical arguments has undoubtedly contributed to the suspicion with which many individuals view science.

There is an increasing alienation between scientists and the general public. Many fewer students enrol for courses in physics and chemistry than a few decades ago. Fewer graduates mean fewer qualified science teachers in schools. This is a vicious cycle that threatens our future. It must be broken.

The danger is that the decreasing level of understanding of science in society means that knowledge (as well as its consequent power) becomes concentrated in the minds of a few individuals. This could have dire consequences for the future of our democracy. Even as things stand now, very few Members of Parliament are scientifically literate. How can we expect to control the application of science when the necessary understanding rests with an unelected “priesthood” that is hardly understood by, or represented in, our democratic institutions?

Very few journalists or television producers know enough about science to report sensibly on the latest discoveries or controversies. As a result, important matters that the public needs to know about do not appear at all in the media, or if they do it is in such a garbled fashion that they do more harm than good.

Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.

Some scientists offer the oversimplified version at the outset, of course, and these are the ones that contribute to the image of scientists as priests. Such individuals often believe in their theories in exactly the same way that some people believe religiously. Not with the conditional and possibly temporary belief that characterizes the scientific method, but with the unquestioning fervour of an unthinking zealot. This approach may pay off for the individual in the short term, in popular esteem and media recognition – but when it goes wrong it is science as a whole that suffers. When a result that has been proclaimed certain is later shown to be false, the result is widespread disillusionment.

The worst example of this tendency that I can think of is the constant use of the phrase “Mind of God” by theoretical physicists to describe fundamental theories. This is not only meaningless but also damaging. As scientists we should know better than to use it. Our theories do not represent absolute truths: they are just the best we can do with the available data and the limited powers of the human mind. We believe in our theories, but only to the extent that we need to accept working hypotheses in order to make progress. Our approach is pragmatic rather than idealistic. We should be humble and avoid making extravagant claims that can’t be justified either theoretically or experimentally.

The more that people get used to the image of “scientist as priest” the more dissatisfied they are with real science. Most of the questions asked of scientists simply can’t be answered with “yes” or “no”. This leaves many with the impression that science is very vague and subjective. The public also tend to lose faith in science when it is unable to come up with quick answers. Science is a process, a way of looking at problems not a list of ready-made answers to impossible problems. Of course it is sometimes vague, but I think it is vague in a rational way and that’s what makes it worthwhile. It is also the reason why science has led to so many objectively measurable advances in our understanding of the World.

I don’t have any easy answers to the question of how to cure this malaise, but do have a few suggestions. It would be easy for a scientist such as myself to blame everything on the media and the education system, but in fact I think the responsibility lies mainly with ourselves. We are usually so obsessed with our own research, and the need to publish specialist papers by the lorry-load in order to advance our own careers that we usually spend very little time explaining what we do to the public or why.

I think every working scientist in the country should be required to spend at least 10% of their time working in schools or with the general media on “outreach”, including writing blogs like this. People in my field – astronomers and cosmologists – do this quite a lot, but these are areas where the public has some empathy with what we do. If only biologists, chemists, nuclear physicists and the rest were viewed in such a friendly light. Doing this sort of thing is not easy, especially when it comes to saying something on the radio that the interviewer does not want to hear. Media training for scientists has been a welcome recent innovation for some branches of science, but most of my colleagues have never had any help at all in this direction.

The second thing that must be done is to improve the dire state of science education in schools. Over the last two decades the national curriculum for British schools has been dumbed down to the point of absurdity. Pupils that leave school at 18 having taken “Advanced Level” physics do so with no useful knowledge of physics at all, even if they have obtained the highest grade. I do not at all blame the students for this; they can only do what they are asked to do. It’s all the fault of the educationalists, who have long done their best to convince our young people that science is too hard for them. Science can be difficult, of course, and not everyone will be able to make a career out of it. But that doesn’t mean that it should not be taught properly to those that can take it in. If some students find it is not for them, then so be it. We don’t need everyone to be a scientist, but we do need many more people to understand how science really works.

I realise I must sound very gloomy about this, but I do think there are good prospects that the gap between science and society may gradually be healed. The fact that the public distrust scientists leads many of them to question us, which is a very good thing. They should question us and we should be prepared to answer them. If they ask us why, we should be prepared to give reasons. If enough scientists engage in this process then what will emerge is an understanding of the enduring value of science. I don’t just mean through the DVD players and computer games science has given us, but through its cultural impact. It is part of human nature to question our place in the Universe, so science is part of what we are. It gives us purpose. But it also shows us a way of living our lives. Except for a few individuals, the scientific community is tolerant, open, internationally-minded, and imbued with a philosophy of cooperation. It values reason and looks to the future rather than the past. Like anyone else, scientists will always make mistakes, but we can always learn from them. The logic of science may not be infallible, but it’s probably the best logic there is in a world so filled with uncertainty.




Cosmic Dawn?

Posted in Astrohype, The Universe and Stuff on March 2, 2018 by telescoper

I’m still in London hoping to get a train back to Cardiff at some point this morning – as I write they are running, but with a reduced service – so I thought I’d make a quick comment on a big piece of astrophysics news. There’s a paper out in this week’s Nature, the abstract of which is

After stars formed in the early Universe, their ultraviolet light is expected, eventually, to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line. This alteration would cause the gas to absorb photons from the cosmic microwave background, producing a spectral distortion that should be observable today at radio frequencies of less than 200 megahertz. Here we report the detection of a flattened absorption profile in the sky-averaged radio spectrum, which is centred at a frequency of 78 megahertz and has a best-fitting full-width at half-maximum of 19 megahertz and an amplitude of 0.5 kelvin. The profile is largely consistent with expectations for the 21-centimetre signal induced by early stars; however, the best-fitting amplitude of the profile is more than a factor of two greater than the largest predictions. This discrepancy suggests that either the primordial gas was much colder than expected or the background radiation temperature was hotter than expected. Astrophysical phenomena (such as radiation from stars and stellar remnants) are unlikely to account for this discrepancy; of the proposed extensions to the standard model of cosmology and particle physics, only cooling of the gas as a result of interactions between dark matter and baryons seems to explain the observed amplitude. The low-frequency edge of the observed profile indicates that stars existed and had produced a background of Lyman-α photons by 180 million years after the Big Bang. The high-frequency edge indicates that the gas was heated to above the radiation temperature less than 100 million years later.

The key plot from the paper is this:

I’ve read the paper and, as was the case with the BICEP2 announcement a few years ago, I’m not entirely convinced. I think the paper is very good at describing the EDGES experiment, but far less convincing that all necessary foregrounds and systematics have been properly accounted for. There are many artefacts that could mimic the signal shown in the diagram.
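For orientation, the quoted numbers (centre 78 MHz, full-width at half-maximum 19 MHz, amplitude 0.5 K) describe a broad, flat-bottomed dip. A fourth-order super-Gaussian makes a serviceable stand-in for sketching it; note this is my own stand-in, not the flattened-Gaussian parameterization actually used in the paper:

```python
# Sketch of a flat-bottomed 21-cm absorption profile with the reported
# headline numbers. A fourth-order super-Gaussian is used as a stand-in
# for the paper's flattened-Gaussian form.
import numpy as np

def profile(nu, amp=0.5, centre=78.0, fwhm=19.0, order=4):
    # choose the width so the half-depth points sit fwhm apart
    w = (fwhm / 2.0) / np.log(2.0) ** (1.0 / order)
    return -amp * np.exp(-np.abs((nu - centre) / w) ** order)

nu = np.linspace(60.0, 96.0, 721)     # frequency grid, MHz
T = profile(nu)                       # brightness temperature, K

half = nu[T <= -0.25]                 # points at or below half depth
print(half.max() - half.min())        # roughly 19 MHz, recovering the FWHM
```

The higher the order, the flatter the bottom of the dip; part of what made the result surprising is that such a flat-bottomed shape is not what the simplest models predict.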

If true, the signal is quite a lot larger in amplitude than standard models predict. That doesn’t mean that it must be wrong – I’ve never gone along with the saying `never trust an experimental result until it is confirmed by theory’ – but it’s way too early to claim that it proves that some new exotic physics is involved. The real explanation may be far more mundane.

There’s been a lot of media hype about this result – reminiscent of the BICEP bubble – and, while I agree that if it is true it is an extremely exciting result – I think it’s far too early to be certain of what it really represents. To my mind there’s a significant chance this could be a false cosmic dawn.

I gather the EDGES team is going to release its data publicly. That will be good, as independent checks of the data analysis would be very valuable.

I’m sorry I haven’t got time for a more detailed post on this, but I have to get my stuff together and head for the train. Comments from experts and non-experts are, as usual, most welcome via the comments box.

A Spot of Hype

Posted in Astrohype, The Universe and Stuff on May 19, 2017 by telescoper

A few weeks ago a paper came out in Monthly Notices of the Royal Astronomical Society (accompanied by a press release from the Royal Astronomical Society) about a possible explanation for the now-famous cold spot in the cosmic microwave background sky that I’ve blogged about on a number of occasions:

If the standard model of cosmology is correct, then a spot as cold and as large as this is quite a rare event, occurring only about 1% of the time in sky patterns simulated using the model assumptions. One possible explanation (which I’ve discussed before) is that this feature is generated not by density fluctuations in the primordial plasma (which are thought to cause the variation of the temperature of the cosmic microwave background across the sky), but by something much more recent in the evolution of the Universe, namely a large local void in the matter distribution, which would cause a temperature fluctuation via the integrated Sachs-Wolfe effect.

The latest paper by Mackenzie et al. (which can be found on the arXiv here) pours enough cold water on that explanation to drown it completely and wash away the corpse. A detailed survey of the galaxy distribution in the direction of the cold spot shows no evidence for an under-density deep enough to affect the CMB. But if the cold spot is not caused by a supervoid, what is it caused by?

Right at the end of the paper the authors discuss a few alternatives, some of them invoking `exotic’ physics early in the Universe’s history. One such possibility arises if we live in an inflationary Universe in which our observable universe is just one of a (perhaps infinite) collection of bubble-like domains which are now causally disconnected. If our bubble collided with another bubble early on then the collision might have distorted the cosmic microwave background in our bubble, in much the same way that a collision with another car might damage your car’s bodywork.

For the record I’ve always found this explanation completely implausible. A simple energy argument suggests that if such a collision were to occur between two inflationary bubbles, it is much more likely to involve their mutual destruction than a small dint. In other words, both cars would be written off.

Nevertheless, the press have seized on this possible explanation, got hold of the wrong end of the stick and proceeded to beat about the bush with it. See, for example, the Independent headline: `Mysterious ‘cold spot’ in space could be proof of a parallel universe, scientists say’.

No. Actually, scientists don’t say that. In particular, the authors of the paper don’t say it either. In fact they don’t mention `proof’ at all. It’s pure hype by the journalists. I don’t blame Mackenzie et al, nor the RAS Press team. It’s just silly reporting.

Anyway, I’m sure I can hear you asking what I think is the origin of the cold spot. Well, the simple answer is that I don’t know for sure. The more complicated answer is that I strongly suspect that at least part of the explanation for why this patch of sky looks as cold as it does is tied up with another anomalous feature of the CMB, i.e. the hemispherical power asymmetry.

In the standard cosmological model the CMB fluctuations are statistically isotropic, which means the variance is the same everywhere on the sky. In observed maps of the microwave background, however, there is a slight but statistically significant variation of the variance, in such a way that the half of the sky that includes the cold spot has larger variance than the opposite half.
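Assessing whether such an asymmetry is significant is, in essence, a Monte Carlo exercise: simulate statistically isotropic skies and ask how often the variance ratio between two halves comes out as large as the one observed. A toy sketch follows; the pixel counts and the `observed’ ratio are invented for illustration, and real CMB pixels are correlated, which this deliberately ignores:

```python
# Toy Monte Carlo for a hemispherical variance asymmetry. Under statistical
# isotropy the north/south variance ratio scatters around 1; a ratio far out
# in the simulated distribution would flag an anomaly. Numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
npix = 2000                       # independent pixels per hemisphere (toy)
nsims = 5000

ratios = np.empty(nsims)
for i in range(nsims):
    north = rng.standard_normal(npix)
    south = rng.standard_normal(npix)
    ratios[i] = north.var() / south.var()

observed = 1.14                   # made-up asymmetry for illustration
p = np.mean(ratios >= observed)   # fraction of isotropic skies this asymmetric
print(p)
```

With correlated pixels the effective number of independent samples is far smaller, so a realistic significance calculation has to simulate the full correlation structure, which is exactly where such analyses get difficult.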

My suspicion is that the hemispherical power asymmetry is either an instrumental artifact (i.e. a systematic of the measurement) or is generated by improper subtraction of foreground signals (from our Galaxy or even from within the Solar system). Whatever causes it, this effect could well modulate the CMB temperature in such a way that it makes the cold spot look more impressive than it actually is. It seems to me that the cold spot could be perfectly consistent with the standard model if this hemispherical anomaly is taken into account. This may not be `exotic’ or `exciting’ or feed the current fetish for the multiverse, but I think it’s the simplest and most probable explanation.

Call me old-fashioned.

P.S. You might like to read this article by Alfredo Carpineti which is similarly sceptical!

Declining Rotation Curves at High Redshift?

Posted in Astrohype, The Universe and Stuff on March 20, 2017 by telescoper

I was thinking of doing my own blog about a recent high-profile result published in Nature by Genzel et al. (and on the arXiv here), but then I see that Stacy McGaugh has already done a much more thorough and better-informed job than I would have done, so instead of trying to emulate his effort I’ll just direct you to his piece.

A recent paper in Nature by Genzel et al. reports declining rotation curves for high redshift galaxies. I have been getting a lot of questions about this result, which would be very important if true. So I thought I’d share a few thoughts here. Nature is a highly reputable journal – in most fields of […]

via Declining Rotation Curves at High Redshift? — Triton Station

P.S. Don’t ask me why WordPress can’t render the figures properly.