Archive for Dark Energy

Is there a kinematic backreaction in cosmology?

Posted in The Universe and Stuff on March 28, 2017 by telescoper

I just noticed that a paper has appeared on the arXiv with the confident title There is no kinematic backreaction. Normally one can be skeptical about such bold claims, but this one is written by Nick Kaiser and he’s very rarely wrong…

The article has a very clear abstract:

[Image: abstract of Kaiser's paper]

This is an important point of debate, because the inference that the universe is dominated by dark energy (i.e. some component of the cosmic energy density that violates the strong energy condition) relies on the assumption that the distribution of matter is homogeneous and isotropic (i.e. that the Universe obeys the Cosmological Principle). Added to the assumption that the large-scale dynamics of the Universe are described by the general theory of relativity, this means that the evolution of the cosmos is described by the Friedmann equations. It is by comparison with the Friedmann equations that we can infer the existence of dark energy from the apparent change in the cosmic expansion rate over time.

But the Cosmological Principle can only be true in an approximate sense, on very large scales, because the universe does contain galaxies, clusters and superclusters. Whether the formation of this cosmic structure influences the expansion rate, by requiring extra terms that do not appear in the Friedmann equations, has been a topic of some discussion over the past few years.

Nick Kaiser says “no”. It’s a succinct and nicely argued paper, but it is entirely Newtonian. It seems to me that if you accept that his argument is correct, then the only way you can maintain that backreaction is significant is by asserting that it is something intrinsically relativistic that is not captured by a Newtonian argument. Since all the relevant velocities are much less than that of light, and the metric perturbations generated by density perturbations are small (\sim 10^{-5}), this seems a hard case to argue.
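To get a feel for why this is such a hard case to argue, here is a back-of-the-envelope estimate (my own illustration, not from the paper; the velocity scales are typical textbook values). For a virialised structure the dimensionless metric perturbation satisfies \Phi/c^2 \sim (v/c)^2:

c = 3.0e5  # speed of light in km/s

# Illustrative internal velocity scales in km/s (assumed typical values,
# not numbers taken from Kaiser's paper).
systems = {
    "galaxy peculiar velocity": 600.0,
    "rich cluster velocity dispersion": 1000.0,
}

for name, v in systems.items():
    # Metric perturbation ~ Phi/c^2 ~ (v/c)^2 for a virialised system.
    print(f"{name}: (v/c)^2 ~ {(v / c) ** 2:.0e}")

# Both estimates come out at ~1e-5 or below, matching the smallness of
# the metric perturbations quoted above.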

I’d be interested in receiving backreactions to this paper via the comments box below.

One Hundred Years of the Cosmological Constant

Posted in History, The Universe and Stuff on February 8, 2017 by telescoper

It was exactly one hundred years ago today – on 8th February 1917 – that a paper was published in which Albert Einstein explored the cosmological consequences of his general theory of relativity, in the course of which he introduced the concept of the cosmological constant.

For the record, the full reference to the paper is Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie (“Cosmological Considerations on the General Theory of Relativity”) and it was published in the Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften. You can find the full text of the paper here. There’s also a nice recent discussion of it by Cormac O’Raifeartaigh and others on the arXiv here.

Here is the first page:

[Image: first page of Einstein's 1917 paper]

It’s well worth looking at this paper – even if your German is as rudimentary as mine – because the argument Einstein constructs is rather different from what you might imagine (or at least that’s what I thought when I first read it). As you see, it begins with a discussion of a modification of Poisson’s equation for gravity.

As is well known, Einstein introduced the cosmological constant in order to construct a static model of the Universe. The 1917 paper pre-dates the work of Friedmann (1922) and Lemaître (1927) that established much of the language and formalism used to describe cosmological models nowadays, so I thought it might be interesting to recapitulate the idea using modern notation. Actually, in honour of the impending centenary, I did this briefly in my lecture on Physics of the Early Universe yesterday.

To simplify matters I’ll just consider a “dust” model, in which pressure can be neglected. In this case, the essential equations governing a cosmological model satisfying the Cosmological Principle are:

\ddot{a} = -\frac{4\pi G \rho a }{3} +\frac{\Lambda a}{3}

and

\dot{a}^2= \frac{8\pi G \rho a^2}{3} +\frac{\Lambda a^2}{3} - kc^2.

In these equations a(t) is the cosmic scale factor (which measures the relative size of the Universe) and dots are derivatives with respect to cosmological proper time, t. The density of matter is \rho>0 and the cosmological constant is \Lambda. The quantity k is the curvature of the spatial sections of the model, i.e. the surfaces on which t is constant.

Now our task is to find a solution of these equations with a(t) = A, say, constant, i.e. with \dot{a}=0 and \ddot{a}=0 for all time.

The first thing to notice is that if \Lambda=0 then this is impossible. One can solve the second equation to make the LHS zero at a particular time by matching the density term to the curvature term, but that only gives a universe that is instantaneously static. The second derivative \ddot{a} is non-zero (indeed negative) in this case, so the system inevitably evolves away from the state in which \dot{a}=0.
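To see this numerically, here is a minimal sketch (my own illustration, not from the original post) that integrates the dust acceleration equation with \Lambda=0, in dimensionless units chosen so that a=1 and 4\pi G \rho/3 = 1 at the starting instant; since \rho \propto a^{-3} for dust, the equation reduces to \ddot{a} = -1/a^2:

from scipy.integrate import solve_ivp

# Dust, Lambda = 0, units with a = 1 and 4*pi*G*rho/3 = 1 initially.
# Since rho scales as a^(-3), the acceleration equation becomes
#   a'' = -(4*pi*G*rho_0/3) / a^2 = -1 / a^2.
def friedmann(t, y):
    a, adot = y
    return [adot, -1.0 / a**2]

# Start exactly static: a = 1, adot = 0 (density matched to curvature).
sol = solve_ivp(friedmann, (0.0, 1.0), [1.0, 0.0], dense_output=True)

for t in (0.0, 0.5, 1.0):
    a, adot = sol.sol(t)
    print(f"t = {t:.1f}: a = {a:.2f}, adot = {adot:.2f}")

# The output shows a shrinking scale factor: the instantaneously static
# state immediately begins to collapse under its own gravity.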

With the cosmological constant term included, it is a different story. First make \ddot{a}=0 in the first equation, which means that

\Lambda=4\pi G \rho.

Now we can make \dot{a}=0 in the second equation by setting

\Lambda a^2 = 4\pi G \rho a^2 = kc^2.

This gives a static universe model, usually called the Einstein universe. Notice that the curvature must be positive, so this is a universe of finite spatial extent but infinite duration.
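To make the solution explicit (a quick extra step of my own, in the same notation, normalising the positive curvature to k=1):

\rho = \frac{\Lambda}{4\pi G} \quad \mbox{and} \quad a = \frac{c}{\sqrt{\Lambda}},

so both the density and the radius of the Einstein universe are fixed entirely by the value of the cosmological constant: the larger \Lambda, the denser and smaller the static universe.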

This idea formed the basis of Einstein’s own cosmological thinking until the early 1930s, when observations began to make it clear that the universe was not static at all, but expanding. In that light it seems that adding the cosmological constant wasn’t really justified, and it is often said that Einstein regarded its introduction as his “biggest blunder”.

I have two responses to that. One is that general relativity, when combined with the cosmological principle, but without the cosmological constant, requires the universe to be dynamical rather than static. If anything, therefore, you could argue that Einstein’s biggest blunder was to have failed to predict the expansion of the Universe!

The other response is that, far from being an ad hoc modification of his theory, there are actually sound mathematical reasons for allowing the cosmological constant term. Although Einstein’s original motivation for considering this possibility may have been misguided, he was justified in introducing it. He was right, if perhaps for the wrong reasons. Nowadays observational evidence suggests that the expansion of the universe may be accelerating. The first equation above tells you that this is only possible if \Lambda\neq 0.

Finally, I’ll just mention another thing in the light of the Einstein (1917) paper. It is clear that Einstein thought of the cosmological constant as a modification of the left hand side of the field equations of general relativity, i.e. the part that expresses the effect of gravity through the curvature of space-time. Nowadays we tend to think of it instead as a peculiar form of energy (called dark energy) that has negative pressure. This sits on the right hand side of the field equations instead of the left, so it is not so much a modification of the law of gravity as an exotic form of energy. You can see the details in an older post here.

A Non-accelerating Universe?

Posted in Astrohype, The Universe and Stuff on October 26, 2016 by telescoper

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis but I’ve skimmed over what they’ve done and as far as I can tell it looks a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version) in terms of the dark energy (y-axis) and matter (x-axis) density parameters:

[Image: confidence contours in the (Ωm, ΩΛ) plane, from the Nature version of the paper]

Models lying along the line shown in this plane have the correct balance between Ωm and ΩΛ for the decelerating effect of the former to cancel the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours show the “1, 2 and 3σ” confidence regions, with all other parameters regarded as nuisance parameters. It is true that the line of no acceleration does go inside the 3σ contour, so in that sense it is not entirely inconsistent with the data. On the other hand, the “best fit” (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
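For reference (my addition, using the standard definition rather than anything specific to this paper), the line of no acceleration is where the present-day deceleration parameter vanishes. For pressureless matter plus a cosmological constant

q0 = Ωm/2 − ΩΛ,

so zero acceleration corresponds to the line ΩΛ = Ωm/2, which indeed passes through the Milne point at the origin.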

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is homogeneous and isotropic on large scales, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!

The Dark Energy MacGuffin

Posted in Science Politics, The Universe and Stuff on December 19, 2015 by telescoper

Back from a two-day meeting in Edinburgh about the Euclid Mission, I have to spend a couple of days this weekend in the office before leaving for the holidays. I was a bit surprised at the end of the meeting to be asked if I would be on the panel for the closing discussion, discussing questions raised by the audience. The first of these questions was – and I have to paraphrase because I don’t remember exactly – whether it would be disappointing if the Euclid mission merely confirmed that observations were consistent with a “simple” cosmological constant rather than any of the more exotic (and perhaps more exciting) alternatives that have been proposed by theorists. I think that’s the likely outcome of Euclid, actually, and I don’t think it would be disappointing if it turned out to be the case. Moreover, testing theories of dark energy is just one of the tasks this mission will undertake and it may well be the case that in years to come Euclid is remembered for something other than dark energy. Anyway, this all triggered a memory of an old post of mine about Alfred Hitchcock so, with apologies for repeating something I blogged about 4 years ago, here is a slight reworking of an old piece.

–0–

Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the “top secret plans” in Hitchcock’s The Thirty Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the “undisclosed object” type of MacGuffin, or it may be scarily obvious, like a doomsday machine or some other “Big Dumb Object” you might find in a science fiction thriller. It may not even be a real thing at all. It could be an event or an idea, or even something that doesn’t exist in any real sense, such as the fictitious decoy character George Kaplan in North by Northwest. In fact North by Northwest is an example of a movie with more than one MacGuffin. Its convoluted plot involves espionage and the smuggling of what is only cursorily described as “government secrets”. These are the main MacGuffin; George Kaplan is a sort of sub-MacGuffin. But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters played by Cary Grant and Eva Marie Saint that really engage the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.

Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved of Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was, however, always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have usually forgotten all about it. Hitchcock’s movies rarely bother to explain their MacGuffin(s) in much detail, and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.

Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity.)

[Video: Alfred Hitchcock interview]

There’s nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail, in the tales of King Arthur and the Knights of the Round Table and, much more recently, The Da Vinci Code. The original Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found.

Physicists are fond of describing things as “The Holy Grail” of their subject, such as the Higgs Boson or gravitational waves. This always seemed to me an unfortunate description, as the Grail quest consumed a huge amount of resources in a predictably fruitless hunt for something whose significance could be seen to be dubious at the outset. The MacGuffin Effect nevertheless continues to reveal itself in science, although in different forms to those found in Hollywood.

The Large Hadron Collider (LHC), switched on to the accompaniment of great fanfares a few years ago, provides a nice example of how the MacGuffin actually works pretty much backwards in the world of Big Science. To the public, the LHC was built to detect the Higgs Boson, a hypothetical beastie introduced to account for the masses of other particles. If it exists, the high-energy collisions engineered by the LHC should reveal its presence. The Higgs Boson is thus the LHC’s own MacGuffin. Or at least it would be if it were really the reason why the LHC was built. In fact there are dozens of experiments at CERN and many of them have very different motivations from the quest for the Higgs, such as the search for evidence of supersymmetry.

Particle physicists are not daft, however, and they have realised that the public and, perhaps more importantly, government funding agencies need to have a really big hook to hang such a big bag of money on. Hence the emergence of the Higgs as a sort of master MacGuffin, concocted specifically for public consumption, which is much more effective politically than the plethora of mini-MacGuffins which, to be honest, would be a fairer description of the real state of affairs.

Even this MacGuffin has its problems, though. The Higgs mechanism is notoriously difficult to explain to the public, so some have resorted to a less specific but more misleading version: “The Big Bang”. As I’ve already griped, the LHC will never generate energies anything like the Big Bang did, so I don’t have any time for the language of the “Big Bang Machine”, even as a MacGuffin.

While particle physicists might pretend to be doing cosmology, we astrophysicists have to contend with MacGuffins of our own. One of the most important discoveries we have made about the Universe in the last decade is that its expansion seems to be accelerating. Since gravity usually tugs on things and makes them slow down, the only explanation that we’ve thought of for this perverse situation is that there is something out there in empty space that pushes rather than pulls. This has various possible names, but Dark Energy is probably the most popular, adding an appropriately noirish edge to this particular MacGuffin. It has even taken over in prominence from its much older relative, Dark Matter, although that one is still very much around.

We have very little idea what Dark Energy is, where it comes from, or how it relates to other forms of energy we are more familiar with, so observational astronomers have jumped in with various grandiose strategies to find out more about it. This has spawned a booming industry in surveys of the distant Universe (such as the Dark Energy Survey or the Euclid mission I mentioned in the preamble) all aimed ostensibly at unravelling the mystery of the Dark Energy. It seems that to get any funding at all for cosmology these days you have to sprinkle the phrase “Dark Energy” liberally throughout your grant applications.

The old-fashioned “observational” way of doing astronomy – by looking at things hard enough until something exciting appears (which it does with surprising regularity) – has been replaced by a more “experimental” approach, more like that of the LHC. We can no longer do deep surveys of galaxies to find out what’s out there. We have to do it “to constrain models of Dark Energy”. This is just one example of the not necessarily positive influence that particle physics has had on astronomy in recent times and it has been criticised very forcefully by Simon White.

Whatever the motivation for doing these projects now, they will undoubtedly lead to new discoveries. But my own view is that there will never be a solution of the Dark Energy problem until it is understood much better at a conceptual level, and that will probably mean major revisions of our theories of both gravity and matter. I venture to speculate that in twenty years or so people will look back on the obsession with Dark Energy with some amusement, as our theoretical language will have moved on sufficiently to make it seem irrelevant.

But that’s how it goes with MacGuffins. Even the Maltese Falcon turned out in the end to be a fake.

To Edinburgh for Euclid

Posted in The Universe and Stuff on December 17, 2015 by telescoper

This morning I flew from London Gatwick to Edinburgh to attend the UK Euclid meeting at the Royal Observatory, which lasts today and tomorrow. It turns out there were two other astronomers on the plane: Alan Heavens from Imperial and Jon Loveday from my own institution, the University of Sussex.

The meeting is very useful for me as it involves a number of updates on the European Space Agency’s Euclid mission. For those of you who don’t know about Euclid here’s what it says on the tin:

Euclid is an ESA mission to map the geometry of the dark Universe. The mission will investigate the distance-redshift relationship and the evolution of cosmic structures by measuring shapes and redshifts of galaxies and clusters of galaxies out to redshifts ~2, or equivalently to a look-back time of 10 billion years. In this way, Euclid will cover the entire period over which dark energy played a significant role in accelerating the expansion.

Here’s an artist’s impression of the satellite:

[Image: artist's impression of the Euclid satellite]

To give you an idea of what an ambitious mission this is, it basically involves repeated imaging of a large fraction of the sky (~15,000 square degrees) over a period of about six years. Each image is so large that it would take 300 HD TV screens to display it at full resolution. The data challenge is considerable, and the signals Euclid is trying to measure are so small that observational systematics have to be controlled with exquisite precision. The requirements are extremely stringent, and there are many challenges to confront, but it’s going well so far. Oh, and there are about 1,200 people working on it!

Coincidentally, this very morning ESA issued a press release announcing that Euclid has passed its PDR (Preliminary Design Review) and is on track for launch in December 2020. I wouldn’t bet against that date slipping, however, as there is a great deal of work still to do and a number of things that could go wrong and cause delays. Nevertheless, so far so good!


Phlogiston, Dark Energy and Modified Levity

Posted in History, The Universe and Stuff on May 21, 2015 by telescoper

What happens when something burns?

Had you asked a seventeenth-century scientist that question, the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials, and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning was thought to separate the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in the weight of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative weight. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, “levity”. Nowadays we would probably say “anti-gravity”.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

So why am I rambling on about a scientific theory that has been defunct for more than two centuries?

Well, there just might be a lesson from history about the state of modern cosmology. Not long ago I gave a talk in the fine city of Bath on the topic of Dark Energy and its Discontents. For the cosmologically uninitiated, the standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of this “dark energy”.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the expansion of the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the density needed to close the Universe. A universe without dark energy appears unable to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
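Putting rough numbers on this (just a consistency check of my own, using the approximate figures quoted above): flat spatial sections require the density parameters to sum to unity, so

Ωm + ΩΛ ≈ 1, and with Ωm ≈ 0.25 from clusters this forces ΩΛ ≈ 0.75,

which is the roughly 75% dark-energy share mentioned above.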

We don’t know much about what this dark energy is, except that in order to make our current understanding work out it has to produce an effect something like anti-gravity, vaguely reminiscent of the “negative weight” hypothesis mentioned above. In most theories, the dark energy component does this by violating the strong energy condition of general relativity. Alternatively, the observations might be accounted for by modifying our theory of gravity so that it produces anti-gravity by some other means. In the light of the discussion above, maybe what we need is a new theory of levity? In other words, maybe we’re taking gravity too seriously?

Anyway, I don’t mind admitting how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists. Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe the dark energy really is phlogiston. That’s got to be worth a paper!

Ned Wright’s Dark Energy Piston

Posted in The Universe and Stuff on April 29, 2015 by telescoper

Since Ned Wright picked up on the fact that I borrowed his famous Dark Energy Piston for my talk I thought I’d include it here in all its animated glory to explain a little bit better why I think it was worth taking the piston.

The two important things about dark energy that enable it to reconcile apparently contradictory observations within the framework of general relativity are: (i) that its energy-density does not decrease with the expansion of the Universe (as do other forms of energy, such as radiation); and (ii) that it has negative pressure which, among other things, means that it causes the expansion of the universe to accelerate.

[Animation: Ned Wright’s Dark Energy Piston]

The Dark Energy Piston (above) shows how these two aspects are related. Suppose the chamber of the piston is filled with “stuff” that has the attributes described above. As the piston moves out, the energy density of the dark energy does not decrease, but the volume it occupies increases, so the total amount of energy in the chamber must increase. Since the system depicted here consists only of the piston and the chamber, this extra energy must have been supplied as work done by the piston on the contents of the chamber. For this to have happened the stuff inside must have resisted being expanded, i.e. it must be in tension. In other words it has to have negative pressure.
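The same argument in equations (my gloss, just the first law of thermodynamics applied to the contents of the chamber): if the energy density u = \rho c^2 stays constant while the volume V changes, then

dU = d(uV) = u\, dV \quad \mbox{and} \quad dU = -p\, dV \quad \Rightarrow \quad p = -u = -\rho c^2,

so a constant energy density forces the pressure to be negative, with precisely the equation of state that appears below.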

Compare the case of “ordinary” matter, in the form of an ideal gas. In such a case the stuff inside the piston does work pushing it out, and the energy density inside the chamber would therefore decrease.

If it seems strange to you that something often called “vacuum energy” has the property that its density does not decrease when it is subjected to expansion, then just consider that a pretty good definition of a vacuum is something that, when you dilute it, you don’t get any less of it!

So how does this dark vacuum energy stuff with negative pressure cause the expansion of the Universe to accelerate?

Well, here’s the equation that governs the dynamical evolution of the Universe:

\ddot{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) a + \frac{\Lambda a}{3}

I’ve included a cosmological constant term (Λ) but ignore this for now. Note that if the pressure p is small (as it would be, for example, for cold dark matter) and the energy density ρ is positive (which it is for all forms of energy we know of) then, in the absence of Λ, the acceleration is always negative, i.e. the universe decelerates. This is in accord with intuition: because gravity always pulls, we expect the expansion to slow down through the mutual attraction of all the matter. However, if the pressure is sufficiently negative, the combination in brackets can be negative, implying accelerated expansion.

In fact if dark energy stuff has an equation of state of the form p=-\rho c^2 then the combination in brackets leads to a fluid with precisely the same effect that a cosmological constant would have, so this is the simplest kind of dark energy.
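Explicitly (a one-line check of my own): substituting p = -\rho c^2 into the combination in brackets gives

\rho + \frac{3p}{c^2} = \rho - 3\rho = -2\rho,

so the bracketed term changes sign and the equation above gives \ddot{a} = \frac{8\pi G \rho a}{3} > 0, i.e. accelerated expansion, exactly as a positive Λ would produce.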

When Einstein introduced the cosmological constant in 1917 he did it by modifying the left hand side of his field equations, essentially modifying the law of gravitation. This discussion shows that he could instead have modified the right hand side by introducing a vacuum energy with an equation of state p=-\rho c^2. A more detailed discussion of this can be found here.

Anyway, whichever way you like to think of dark energy, the fact of the matter is that we don’t know how to explain it from a fundamental point of view. The only thing I can be sure of is that, whatever it is in itself, dark energy is a truly terrible name for it.

I’d go for “persistent tension”…