Archive for Dark Energy

Hubble Tension: an “Alternative” View?

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , on July 25, 2019 by telescoper

There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.

Here is the abstract of the Vagnozzi paper:

I posted this picture last week which is relevant to the discussion:

The point is that if you allow the equation of state parameter w to vary from the value of w=-1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter then you have to assign a prior probability over the space of values that parameter could hold. That prior penalty is carried through to the posterior probability. Unless the new model fits observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.
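To make the Ockham penalty concrete, here is a toy sketch in Python. The Gaussian likelihood for w, its peak and width, and the prior range are all made-up illustrative numbers, not taken from any real cosmological analysis:

```python
import numpy as np

# Toy Gaussian likelihood for w, peaking below -1.
# These numbers are purely illustrative.
w_hat, sigma = -1.2, 0.15

def likelihood(w):
    return np.exp(-0.5 * ((w - w_hat) / sigma) ** 2)

# Model A: standard cosmology, w fixed at -1.
Z_A = likelihood(-1.0)

# Model B: w free, uniform prior on [-3, 0].  The evidence averages the
# likelihood over the prior, so the prior width acts as a penalty.
w_grid = np.linspace(-3.0, 0.0, 3001)
dw = w_grid[1] - w_grid[0]
Z_B = likelihood(w_grid).sum() * dw / 3.0

# Model C: w fixed a posteriori at the value the data happen to prefer.
Z_C = likelihood(-1.3)

print(f"B vs A Bayes factor: {Z_B / Z_A:.2f}")  # about 0.3: free w disfavoured
print(f"C vs A Bayes factor: {Z_C / Z_A:.2f}")  # about 1.9: tuned fixed w 'wins'
```

The free-w model has the better best fit but its evidence is diluted by averaging over the wide prior, while the model with w fixed a posteriori at the preferred value escapes that penalty entirely, which is the manoeuvre discussed in the next paragraph.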

The Vagnozzi paper represents a statement of this in the context of the Hubble tension. If a new floating parameter w is introduced the data prefer a value less than -1 (as demonstrated in the figure) but on posterior probability grounds the resulting model is less probable than the standard cosmology for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is introduced then the resulting model is not penalized by having to spread the prior probability out over a range of values but puts all its prior eggs in one basket labelled w = -1.3.

This is of course true. The problem is that the value of w = -1.3 does not derive from any ab initio principle of physics but a posteriori, from the inference described above. It’s no surprise that you can get a better answer if you know in advance what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score.

Indeed, many cosmologists think that any value of w < -1 should be ruled out ab initio, because such values don’t make physical sense anyway.

 

 

 


Hubble’s Constant – A Postscript on w

Posted in The Universe and Stuff with tags , , , , , , , on July 15, 2019 by telescoper

Last week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. You can download a PDF of the full paper here.

Reading the paper through over the weekend I was struck by Figure 6:

This shows the constraints on H0 and the parameter w which is used to describe the dark energy component. Bear in mind that these estimates of cosmological parameters actually involve the simultaneous estimation of several parameters, six in the case of the standard ΛCDM model. Incidentally, H0 is not one of the six basic parameters of the standard model – it is derived from the others – and some important cosmological observations are relatively insensitive to its value.

The parameter w is the equation of state parameter for the dark energy component, so that the pressure p is related to the energy density ρc² via p = wρc². The fixed value w = -1 applies if the dark energy is of the form of a cosmological constant (or vacuum energy). I explained why here. Non-relativistic matter (dominated by rest-mass energy) has w = 0, while ultra-relativistic matter has w = 1/3.

Applying the cosmological version of the thermodynamic relation for adiabatic expansion, dE = -p dV, one finds that ρ ∼ a^(-3(1+w)), where a is the cosmic scale factor. Note that w = -1 gives a constant energy density as the Universe expands (the cosmological constant), while w = 0 gives ρ ∼ a^(-3), as expected for ‘ordinary’ matter.
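The scaling ρ ∼ a^(-3(1+w)) is easy to play with numerically. Here is a minimal Python sketch; the function name and the sample values of w are just for illustration:

```python
def density(a, w, rho0=1.0):
    """Energy density relative to its value today (a = 1): rho ~ a^(-3(1+w))."""
    return rho0 * a ** (-3.0 * (1.0 + w))

a = 2.0  # the Universe has doubled in linear size
print(density(a, w=0.0))    # ordinary matter dilutes as a^-3: 0.125
print(density(a, w=-1.0))   # cosmological constant stays constant: 1.0
print(density(a, w=-2.0))   # w < -1: density grows as a^3: 8.0
```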

As I already mentioned, in the standard cosmological model w is fixed at w = -1, but if it is treated as a free parameter then it can be added to the usual six to produce the Figure shown above. I should add for Bayesians that this plot shows the posterior probability assuming a uniform prior on w.

What is striking is that the data seem to prefer a very low value of w. Indeed the peak of the likelihood (which determines the peak of the posterior probability if the prior is flat) appears to be off the bottom of the plot. It must be said that the size of the black contour lines (at one sigma and two sigma for dashed and solid lines respectively) suggests that these data aren’t really very informative; the case w=-1 is well within the 2σ contour. In other words, one might get a slightly better fit by allowing the equation of state parameter to float, but the quality of the fit might not improve sufficiently to justify the introduction of another parameter.

Nevertheless it is worth mentioning that if it did turn out, for example, that w = -2, that would imply ρ ∼ a^(+3), i.e. an energy density that increases steeply as a increases (i.e. as the Universe expands). That would be pretty wild!

On the other hand, there isn’t really any physical justification for cases with w<-1 (in terms of a plausible model) which, in turn, makes me doubt the reasonableness of imposing a flat prior. My own opinion is that if dark energy turns out not to be of the simple form of a cosmological constant then it is likely to be too complicated to be expressed in terms of a single number anyway.

 

Postscript to this postscript: take a look at this paper from 2002!

Dark Energy – Lectures by Varun Sahni

Posted in The Universe and Stuff with tags , , on June 9, 2019 by telescoper

I thought I’d share this lecture course about Dark Energy here. It was delivered by Varun Sahni at an international school on cosmology earlier this year. The material is quite technical in places but I’m sure these lectures will prove a very helpful introduction to, for example, new PhD students in this area. Varun has been a very good friend and colleague of mine for many years, and he is an excellent lecturer!

Here are the three lectures:

The Negative Mass Bug

Posted in Astrohype, Open Access, The Universe and Stuff with tags , , , , , on February 25, 2019 by telescoper

You may have noticed that some time ago I posted about a paper by Jamie Farnes, published in Astronomy & Astrophysics but available on the arXiv here, which suggests that material with negative mass might account for dark energy and/or dark matter.

Here is the abstract of said paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

Well, there’s a new paper just out on the arXiv by Hector Socas-Navarro, with the following abstract:

A recent work by Farnes (2018) proposed an alternative cosmological model in which both dark matter and dark energy are replaced with a single fluid of negative mass. This paper presents a critical review of that model. A number of problems and discrepancies with observations are identified. For instance, the predicted shape and density of galactic dark matter halos are incorrect. Also, halos would need to be less massive than the baryonic component or they would become gravitationally unstable. Perhaps the most challenging problem in this theory is the presence of a large-scale version of the `runaway’ effect, which would result in all galaxies moving in random directions at nearly the speed of light. Other more general issues regarding negative mass in general relativity are discussed, such as the possibility of time-travel paradoxes.

Among other things there is this:

After initially struggling to reproduce the F18 results, a careful inspection of his source code revealed a subtle bug in the computation of the gravitational acceleration. Unfortunately, the simulations in F18 are seriously compromised by this coding error whose effect is that the gravitational force decreases with the inverse of the distance, instead of the distance squared.

Oh dear.
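For illustration only, here is a minimal Python sketch of how such a bug can arise in an N-body force routine. This is my own reconstruction of the kind of error described, not the actual code from Farnes (2018):

```python
import numpy as np

def acceleration(pos_i, pos_j, m_j, G=1.0, buggy=False):
    """Gravitational acceleration on particle i due to particle j."""
    r_vec = pos_j - pos_i
    r = np.linalg.norm(r_vec)
    if buggy:
        # A bug analogous to the one described: dividing the direction
        # vector by r**2 instead of r**3 makes the force fall off as 1/r.
        return G * m_j * r_vec / r ** 2
    # Correct inverse-square law: magnitude G*m/r**2, direction r_vec/r.
    return G * m_j * r_vec / r ** 3

p_i = np.array([0.0, 0.0, 0.0])
p_j = np.array([2.0, 0.0, 0.0])
print(np.linalg.norm(acceleration(p_i, p_j, m_j=1.0)))              # 0.25 = 1/r^2
print(np.linalg.norm(acceleration(p_i, p_j, m_j=1.0, buggy=True)))  # 0.5  = 1/r
```

Normalising the direction vector by r² instead of r³ silently turns the inverse-square law into an inverse-distance law, even though every individual line still looks plausible.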

I don’t think I need go any further into this particular case, which would just rub salt into the wounds of Farnes (2018) but I will make a general comment. Peer review is the best form of quality stamp that we have but, as this case demonstrates, it is by no means flawless. The paper by Farnes (2018) was refereed and published, but is now shown to be wrong*. Just as authors can make mistakes so can referees. I know I’ve screwed up as a referee in the past so I’m not claiming to be better than anyone in saying this.

*This claim is contested: see the comment below.

I don’t think the lesson is that we should just scrap peer review, but I do think we need to be more imaginative about how it is used than just relying on one or two individuals to do it. This case shows that science eventually works, as the error was found and corrected, but that was only possible because the code used by Farnes (2018) was made available for scrutiny. This is not always what happens. I take this as a vindication of open science, and an example of why scientists should share their code and data to enable others to check the results. I’d like to see a system in which papers are not regarded as `final’ documents but things which can be continuously modified in response to independent scrutiny, but that would require a major upheaval in academic practice and is unlikely to happen any time soon.

In this case, in the time since publication there has been a large amount of hype about the Farnes (2018) paper, and it’s unlikely that any of the media who carried stories about the results therein will ever publish retractions. This episode does therefore illustrate the potentially damaging effect on public trust that the excessive thirst for publicity can have. So how do we balance open science against the likelihood that wrong results will be taken up by the media before the errors are found? I wish I knew!

The Future Circular Collider: what’s the MacGuffin?

Posted in Science Politics, The Universe and Stuff with tags , , , , , , , on February 7, 2019 by telescoper

I’ve been reading a few items here and there about proposals for a Future Circular Collider, even larger than the Large Hadron Collider (and consequently even more expensive). No doubt particle physicists interested in accelerator experiments will be convinced this is the right move, but of course there are other projects competing for funds and it’s by no means certain that the FCC will actually happen.

One of the important things about `Big Science’ when it gets this big is that it has to capture the imagination of people with political influence if it is to be granted funding. Based on past experience that means that there has to be a Big Discovery to be made or a Big Idea to be tested. This Big Thing has to be simple enough for politicians to understand and exciting enough to capture their imagination (and that of the public). In the case of the Large Hadron Collider (LHC), for example, this was the Higgs Boson. In the case of the Euclid space mission, the motivation is Dark Energy.

The Big Thing that sells a project to politicians is not necessarily the thing that most scientists are interested in. The LHC has done a lot of things other than discover the Higgs, and Euclid will do many things other than probe Dark Energy, but there has to be one thing to set it all in motion. It seems to me that the Big Question about the FCC is whether there is something specific that can motivate this project in the way the Higgs did for the LHC? If so, what is it?

Answers on a postcard or, better, through the comments box below.

 

Humphrey Bogart with the eponymous Maltese Falcon

Anyway, these thoughts reminded me of the concept of a MacGuffin. Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the “top secret plans” in Hitchcock’s The Thirty Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the “undisclosed object” type of MacGuffin, or it may be scarily obvious, like a doomsday machine or some other “Big Dumb Object” you might find in a science fiction thriller.

Or the MacGuffin may not be a real thing at all. It could be an event or an idea, or even something that doesn’t actually exist in any sense, such as the fictitious decoy character George Kaplan in North by Northwest. In fact North by Northwest is an example of a movie with more than one MacGuffin. Its convoluted plot involves espionage and the smuggling of what is only cursorily described as “government secrets”. These are the main MacGuffin; George Kaplan is a sort of sub-MacGuffin. But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters played by Cary Grant and Eva Marie Saint that really engages the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.

Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved by Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was, however, always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have usually forgotten all about it. Hitchcock’s movies rarely bother to explain their MacGuffin(s) in much detail, and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.

Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity.)

There’s nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail in the tales of King Arthur and the Knights of the Round Table, in which the Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found. That’s often how it goes with MacGuffins – even the Maltese Falcon turned out in the end to be a fake – they’re only really needed to start things off.

So let me rephrase the question I posed earlier on. In the case of the Future Circular Collider, what’s the MacGuffin?

Negative Mass, Phlogiston and the State of Modern Cosmology

Posted in Astrohype, The Universe and Stuff with tags , , on December 7, 2018 by telescoper

A graphical representation of something or other.

I’ve noticed a modest amount of hype – much of it gibberish – going around about a paper published in Astronomy & Astrophysics but available on the arXiv here which entails a suggestion that material with negative mass might account for dark energy and/or dark matter. Here is the abstract of the paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

For a skeptical commentary on this work, see here.

The idea of negative mass is by no means new, of course. If you had asked a seventeenth-century scientist the question “what happens when something burns?” the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials, and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight, implying an increase in the mass of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative mass. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, ‘levity’. Nowadays we would probably say ‘anti-gravity’.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
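In terms of the Friedmann acceleration equation, the requirement works out as follows (a standard textbook result, sketched here for completeness, using the equation of state p = wρc²):

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
                   = -\frac{4\pi G}{3}\,\rho\,(1 + 3w)
```

so accelerated expansion (ä > 0) requires w < -1/3, i.e. a component with sufficiently negative pressure. The cosmological constant, with w = -1, comfortably satisfies this, which is the sense in which dark energy acts as a source of anti-gravity and violates the strong energy condition.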

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe, as the paper that prompted this piece might be taken to suggest, the dark energy really is something like phlogiston. At least I prefer the name to quintessence. However, I think the author has missed a trick: to create a properly trendy cosmological theory he should include the concept of supersymmetry, according to which there should be a fermionic counterpart of phlogiston called the phlogistino…

Strong constraints on cosmological gravity from GW170817 and GRB 170817A

Posted in The Universe and Stuff with tags , , , , , on October 24, 2017 by telescoper

One of the many interesting scientific results to emerge from last week’s announcement of a gravitational wave source (GW170817) with an electromagnetic counterpart (GRB 170817A) is the fact that it provides constraints on departures from Einstein’s General Theory of Relativity. In particular, the (lack of a) time delay between the arrival of the gravitational and electromagnetic signals can be used to rule out models that predict that gravitational waves and electromagnetic waves travel at different speeds. The fractional time delay associated with this source is constrained to be less than about 10^-15, which actually rules out many of the proposed alternatives to general relativity. Modifications of Einstein’s gravity have been proposed for a number of reasons, including the desire to explain the dynamics of the expanding Universe without the need for Dark Energy or Dark Matter (or other exotica), but many of these are now effectively dead.
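As a back-of-the-envelope check, here is a Python sketch of how an arrival-time difference translates into a bound on the fractional speed difference. The 40 Mpc distance and 1.7 s delay are round illustrative numbers of the right order for this event, not the values used in the Baker et al. analysis:

```python
# Illustrative conversion of a GW/GRB arrival-time difference into a
# bound on the fractional speed difference between gravitational and
# electromagnetic waves.
C = 3.0e8        # speed of light, m/s
MPC = 3.086e22   # one megaparsec in metres

distance = 40.0 * MPC        # roughly the distance to the source
travel_time = distance / C   # light travel time, about 4e15 seconds
delta_t = 1.7                # observed arrival-time difference, seconds

# Attributing the whole delay to a speed difference gives the bound
# |v_gw - v_em| / v_em  <~  delta_t / travel_time.
bound = delta_t / travel_time
print(f"|delta v| / v  <~  {bound:.1e}")  # about 4e-16 with these numbers
```

Because the source is so distant and the delay so short, even this crude estimate lands many orders of magnitude below anything terrestrial experiments can reach, which is why a single event is so constraining.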

Anyway, I bookmarked a nice paper about this last week while I was in India but forgot to post it then, so if you’re interested in reading more about this have a look at this arXiv paper by Baker et al., which has the following abstract:

The detection of an electromagnetic counterpart (GRB 170817A) to the gravitational wave signal (GW170817) from the merger of two neutron stars opens a completely new arena for testing theories of gravity. We show that this measurement allows us to place stringent constraints on general scalar-tensor and vector-tensor theories, while allowing us to place an independent bound on the graviton mass in bimetric theories of gravity. These constraints severely reduce the viable range of cosmological models that have been proposed as alternatives to general relativistic cosmology.