## One Hundred Years of the Cosmological Constant: from ‘Superfluous Stunt’ to Dark Energy

Posted in History, The Universe and Stuff with tags , , , on November 21, 2017 by telescoper

Some months ago I did a little post on the occasion of the 100th anniversary of the introduction of the cosmological constant which included a link to the original paper on this subject by Albert Einstein. A nice thread of well-informed comments followed that post and one of the contributors to that thread, Cormac O’Raifeartaigh, is lead author of a paper that has just appeared on the arXiv. It’s quite a lengthy paper (62 pages) that gives an account of the cosmological constant in the context of modern observational cosmology. You can get a PDF of the paper here. It’s well worth reading!

We present a centennial review of the history of the term known as the cosmological constant. First introduced to the general theory of relativity by Einstein in 1917 in order to describe a universe that was assumed to be static, the term fell from favour in the wake of the discovery of the expanding universe, only to make a dramatic return in recent times. We consider historical and philosophical aspects of the cosmological constant over four main epochs: (i) the use of the term in static cosmologies (both Newtonian and relativistic); (ii) the marginalization of the term following the discovery of cosmic expansion; (iii) the use of the term to address specific cosmic puzzles such as the timespan of expansion, the formation of galaxies and the redshifts of the quasars; (iv) the re-emergence of the term in today’s Lambda-CDM cosmology. We find that the cosmological constant was never truly banished from theoretical models of the universe, but was sidelined by astronomers for reasons of convenience. We also find that the return of the term to the forefront of modern cosmology did not occur as an abrupt paradigm shift due to one particular set of observations, but as the result of a number of empirical advances such as the measurement of present cosmic expansion using the Hubble Space Telescope, the measurement of past expansion using type Ia supernovae as standard candles, and the measurement of perturbations in the cosmic microwave background by balloon and satellite. We give a brief overview of contemporary interpretations of the physics underlying the cosmological constant and conclude with a synopsis of the famous cosmological constant problem.

## Cosmology beyond the Centenary of Λ

Posted in Talks and Reviews, The Universe and Stuff with tags , , on June 6, 2017 by telescoper

I didn’t expect to be doing anything other than listening to the talks and getting updated on the progress of the Euclid project at this meeting in London, but this morning I was roped in to introduce a public event tomorrow evening, called Cosmology beyond the Centenary of Λ:

This will take the form of a dialogue/discussion/debate between two leading cosmologists taking a ‘big picture’ view of the state of cosmology now and likely future developments. I’m sure it will be very friendly so I won’t use any form of language that suggests confrontation, but it features, in the red corner, George Efstathiou of the University of Cambridge and, in the blue corner, Ofer Lahav of University College London.

Incidentally, I posted some months ago about the fact that this is the centenary year of Einstein’s introduction of the cosmological constant into the field equations of general relativity in this paper:

I recommend anyone attending this Euclid meeting, and indeed anyone with a passing interest in cosmology, to read that paper – it’s very different from what you might imagine it to be!

## One Hundred Years of the Cosmological Constant

Posted in History, The Universe and Stuff with tags , , , , , , on February 8, 2017 by telescoper

It was exactly one hundred years ago today – on 8th February 1917 – that a paper was published in which Albert Einstein explored the cosmological consequences of his general theory of relativity, in the course of which he introduced the concept of the cosmological constant.

For the record the full reference to the paper is: Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie (“Cosmological Considerations in the General Theory of Relativity”) and it was published in the Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften. You can find the full text of the paper here. There’s also a nice recent discussion of it by Cormac O’Raifeartaigh and others on the arXiv here.

Here is the first page:

It’s well worth looking at this paper – even if your German is as rudimentary as mine – because the argument Einstein constructs is rather different from what you might imagine (or at least that’s what I thought when I first read it). As you see, it begins with a discussion of a modification of Poisson’s equation for gravity.

As is well known, Einstein introduced the cosmological constant in order to construct a static model of the Universe. The 1917 paper pre-dates the work of Friedmann (1922) and Lemaître (1927) that established much of the language and formalism used to describe cosmological models nowadays, so I thought it might be interesting just to recapitulate the idea using modern notation. Actually, in honour of the impending centenary I did this briefly in my lecture on Physics of the Early Universe yesterday.

To simplify matters I’ll just consider a “dust” model, in which pressure can be neglected. In this case, the essential equations governing a cosmological model satisfying the Cosmological Principle are:

$\ddot{a} = -\frac{4\pi G \rho a }{3} +\frac{\Lambda a}{3}$

and

$\dot{a}^2= \frac{8\pi G \rho a^2}{3} +\frac{\Lambda a^2}{3} - kc^2.$

In these equations $a(t)$ is the cosmic scale factor (which measures the relative size of the Universe) and dots are derivatives with respect to cosmological proper time, $t$. The density of matter is $\rho>0$ and the cosmological constant is $\Lambda$. The quantity $k$ is the curvature of the spatial sections of the model, i.e. the surfaces on which $t$ is constant.

Now our task is to find a solution of these equations with $a(t)= A$, say, constant for all time, i.e. such that $\dot{a}=0$ and $\ddot{a}=0$ for all time.

The first thing to notice is that if $\Lambda=0$ then this is impossible. One can solve the second equation to make the LHS zero at a particular time by matching the density term to the curvature term, but that only makes a universe that is instantaneously static. The second derivative is non-zero in this case so the system inevitably evolves away from the situation in which $\dot{a}=0$.

With the cosmological constant term included, it is a different story. First make $\ddot{a}=0$ in the first equation, which means that

$\Lambda=4\pi G \rho.$

Now we can make $\dot{a}=0$ in the second equation by setting

$\Lambda a^2 = 4\pi G \rho a^2 = kc^2.$

This gives a static universe model, usually called the Einstein universe. Notice that the curvature must be positive, so this is a universe of finite spatial extent but infinite duration.
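As a sanity check, the two conditions can be verified numerically against the pair of equations above. This is a minimal sketch in illustrative units ($G = c = \rho = a = 1$), not physical values:

```python
import math

# Quick numerical check of the Einstein static solution, in
# illustrative units (G = c = rho = a = 1), not physical values.
G = c = rho = a = 1.0

Lam = 4 * math.pi * G * rho   # condition from setting a-double-dot = 0
k = Lam * a**2 / c**2         # condition Lambda a^2 = k c^2 (positive curvature)

# The two equations governing the dust model:
a_ddot = -(4 * math.pi * G * rho * a) / 3 + (Lam * a) / 3
a_dot_sq = (8 * math.pi * G * rho * a**2) / 3 + (Lam * a**2) / 3 - k * c**2

print(a_ddot, a_dot_sq)  # both vanish: the model neither expands nor accelerates
```

Both right-hand sides come out zero, confirming that the two conditions together give a genuinely static model rather than a merely instantaneously static one.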

This idea formed the basis of Einstein’s own cosmological thinking until the early 1930s when observations began to make it clear that the universe was not static at all, but expanding. In that light it seems that adding the cosmological constant wasn’t really justified, and it is often said that Einstein regarded its introduction as his “biggest blunder”.

I have two responses to that. One is that general relativity, when combined with the cosmological principle, but without the cosmological constant, requires the universe to be dynamical rather than static. If anything, therefore, you could argue that Einstein’s biggest blunder was to have failed to predict the expansion of the Universe!

The other response is that, far from being an ad hoc modification of his theory, there are actually sound mathematical reasons for allowing the cosmological constant term. Although Einstein’s original motivation for considering this possibility may have been misguided, he was justified in introducing it. He was right, if perhaps for the wrong reasons. Nowadays observational evidence suggests that the expansion of the universe may be accelerating. The first equation above tells you that this is only possible if $\Lambda>0$.

Finally, I’ll just mention another thing in the light of the Einstein (1917) paper. It is clear that Einstein thought of the cosmological constant as a modification of the left hand side of the field equations of general relativity, i.e. the part that expresses the effect of gravity through the curvature of space-time. Nowadays we tend to think of it instead as a peculiar form of energy (called dark energy) that has negative pressure. This sits on the right hand side of the field equations instead of the left so is not so much a modification of the law of gravity as an exotic form of energy. You can see the details in an older post here.

## Tension in Cosmology?

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , , , on October 24, 2013 by telescoper

I noticed this abstract (of a paper by Rest et al.) on the arXiv the other day:

We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae (0.03<z<0.65) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We have investigated spatial and time variations in the photometry, and we find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the HST Calspec definition of the AB system. We discuss our efforts to minimize the systematic uncertainties in the photometry. A Hubble diagram is constructed with a subset of 112 SNe Ia (out of the 146) that pass our light curve quality cuts. The cosmological fit to 313 SNe Ia (112 PS1 SNe Ia + 201 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields w = -1.015^{+0.319}_{-0.201}(Stat)^{+0.164}_{-0.122}(Sys). When combined with BAO+CMB(Planck)+H0, the analysis yields \Omega_M = 0.277^{+0.010}_{-0.012} and w = -1.186^{+0.076}_{-0.065} including all identified systematics, as spelled out in the companion paper by Scolnic et al. (2013a). The value of w is inconsistent with the cosmological constant value of -1 at the 2.4 sigma level. This tension has been seen in other high-z SN surveys and endures after removing either the BAO or the H0 constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find w = -1.142^{+0.076}_{-0.087}, which diminishes the discord to <2 sigma. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample will be 3 times as large as this initial sample, which should provide more conclusive results.

The mysterious Pan-STARRS stands for the Panoramic Survey Telescope and Rapid Response System, a set of telescopes, cameras and related computing hardware that monitors the sky from its base in Hawaii. One of the many things this system can do is detect and measure distant supernovae, hence the particular application to cosmology described in the paper. The abstract mentions a preliminary measurement of the parameter w, which for those of you who are not experts in cosmology is usually called the “equation of state” parameter for the dark energy component involved in the standard model. What it describes is the relationship between the pressure P and the energy density ρc2 of this mysterious stuff, via the relation P=wρc2. The particularly interesting case is w=-1 which corresponds to a cosmological constant term; see here for a technical discussion. However, we don’t know how to explain this dark energy from first principles so really w is a parameter that describes our ignorance of what is actually going on. In other words, the cosmological constant provides the simplest model of dark energy but even in that case we don’t know where it comes from so it might well be something different; estimating w from surveys can therefore tell us whether we’re on the right track or not.
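For any fluid with constant w, the standard result is that its density scales with the cosmic scale factor as ρ ∝ a^(−3(1+w)). A minimal sketch (illustrative normalisation, not from the paper) shows how different values of w behave as the Universe doubles in size:

```python
def density_scaling(a, w, rho0=1.0):
    """Density of a constant-w fluid at scale factor a, with rho = rho0 at a = 1."""
    return rho0 * a ** (-3 * (1 + w))

# Doubling the size of the Universe (a: 1 -> 2):
matter = density_scaling(2.0, w=0)       # dust dilutes as a^-3
radiation = density_scaling(2.0, w=1/3)  # radiation dilutes as a^-4
lambda_ = density_scaling(2.0, w=-1)     # cosmological constant: unchanged
phantom = density_scaling(2.0, w=-1.2)   # w < -1: density *grows* as a expands
print(matter, radiation, lambda_, phantom)
```

For w = −1 the density stays constant as the Universe expands, while for w < −1 it actually increases, which is the peculiar behaviour associated with the “phantom” regime mentioned below.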

The abstract explains that, within the errors, the Pan-STARRS data on their own are consistent with w=-1. More interestingly, though, combining the supernovae observations with others, the best-fit value of w shifts towards a value a bit less than -1 (although still with quite a large uncertainty). Incidentally, a value of w less than -1 is generally described as a “phantom” dark energy component. I’ve never really understood why…

So far estimates of cosmological parameters from different data sets have broadly agreed with each other, hence the application of the word “concordance” to the standard cosmological model.  However, it does seem to be the case that supernova measurements do generally seem to push cosmological parameter estimates away from the comfort zone established by other types of observation. Could this apparent discordance be signalling that our ideas are wrong?

That’s the line pursued by a Scientific American article on this paper entitled “Leading Dark Energy Theory Incompatible with New Measurement”. This could be true, but I think it’s a bit early to be taking this line when there are still questions to be answered about the photometric accuracy of the Pan-Starrs survey. The headline I would have picked would be more like “New Measurement (Possibly) Incompatible With Other Measurements of Dark Energy”.

But that would have been boring…

## A Little Bit of Gravitational Lensing

Posted in The Universe and Stuff with tags , , , , , on December 30, 2012 by telescoper

I thought I’d take a short break from doing absolutely nothing to post a quick little item about gravitational lensing. It’s been in my mind to say something about this since I mentioned it in one of the lectures I gave just before Christmas, but I’ve been too busy (actually too disorganized) to do it until now. It’s all based on a paper posted to the arXiv in December which was led by Jo Woodward (née Short) who did her PhD with me in Cardiff and is now in a postdoctoral research position in Durham (which is in the Midlands). The following pictures were taken from her paper.

This figure shows the geometry of a gravitational lens system: light from the source S is deflected by the gravitational potential of the lens L so that an image I appears at a position on the sky which is different from the actual position when viewed by the observer O:

There’s a critical radius (which depends on the mass and density profile of the lens) at which this can lead to the formation of multiple images of the source. Even if multiple images are not resolved, lensing results in an increase in the apparent brightness of the source.
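The critical radius referred to here is usually called the Einstein radius. For the simplest case of a point-mass lens it takes the standard form (quoted here for context; it is not taken from the paper itself)

$\theta_E = \sqrt{\frac{4GM}{c^2}\, \frac{D_{ls}}{D_l D_s}},$

where $M$ is the lens mass and $D_l$, $D_s$ and $D_{ls}$ are angular-diameter distances to the lens, to the source, and between lens and source respectively.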

A great deal of cosmological information can be gleaned statistically from lensing with even limited knowledge of the properties of the source and lens populations and with incomplete information about e.g. the actual angular deflection produced by the lens or the lens mass. To illustrate this, just consider the expression for the differential optical depth to lensing (related to the probability that a source at redshift $z_s$ is lensed by an object at redshift $z_l$):
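The expression itself appeared as an image in the original post. Schematically (my reconstruction, based on the term-by-term description that follows, so the paper’s exact form may differ in detail) it reads

$\frac{{\rm d}\tau}{{\rm d}z_l} = \frac{c\, {\rm d}t}{{\rm d}z_l}\, (1+z_l)^3 \int {\rm d}M\, \frac{{\rm d}n}{{\rm d}M}(M,z_l)\, \sigma(M,z_l).$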

The first two terms are cosmological, accounting for geometrical and expansion effects. Roughly speaking, the larger the volume out to a given redshift the higher the probability is that a given source will be lensed. The third term involves the mass function of lens systems. In the framework of the standard cosmological model this can be computed using Press-Schechter theory or one of the variations thereof. According to current understanding, cosmological structures (i.e. galaxies and clusters of galaxies) form hierarchically so this mass function changes with redshift, with fewer high mass objects present at high redshift than at low redshift, as represented in this picture, in which masses are given in units of solar masses, the colour-coding representing different redshifts:
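For reference, the standard Press-Schechter form of the mass function (quoted for context, not from the paper) is

$\frac{{\rm d}n}{{\rm d}M} = \sqrt{\frac{2}{\pi}}\, \frac{\bar{\rho}}{M^2}\, \frac{\delta_c}{\sigma(M)} \left| \frac{{\rm d}\ln \sigma}{{\rm d}\ln M} \right| \exp\left(-\frac{\delta_c^2}{2\sigma^2(M)}\right),$

where $\bar{\rho}$ is the mean matter density, $\sigma(M)$ is the rms density fluctuation on mass scale $M$ and $\delta_c \simeq 1.686$ is the critical overdensity for collapse.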

The last term represents the lensing cross-section of an object with a given mass. This depends on the internal structure of the lens – an object in which the mass is highly concentrated produces lensing effects radically different from one that isn’t. Two simple models for the mass distribution are the singular isothermal sphere (SIS) and the Navarro-Frenk-White profile (NFW). The latter is thought (by some) to represent the distribution of cold dark matter in haloes around galaxies and clusters which is more diffuse than that of the baryonic material because it can’t dissipate energy which it needs to do to fall into the centre of the object. The real potential of a galaxy in its central regions could be more like the SIS profile would predict, however, because baryons outweigh dark matter there.
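To see how the internal structure enters, note that for the SIS the Einstein radius depends only on the velocity dispersion $\sigma_v$ of the lens (a standard result, quoted here for context):

$\theta_E = 4\pi\, \frac{\sigma_v^2}{c^2}\, \frac{D_{ls}}{D_s},$

so the lensing cross-section scales as $\theta_E^2 \propto \sigma_v^4$; a more diffuse profile like NFW gives a different, generally smaller, cross-section for the same mass.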

Now time for a bit of historical reminiscence. In 1997 I published a book with George Ellis in which we analysed the evidence available at the time relating to the density of matter in the Universe. It was a little bit controversial at the time, but it turns out we were correct in concluding that the density of matter was well below the level favoured by most theorists i.e. only about 20-30% of the critical density. However we did not find any compelling evidence at that time for a cosmological constant (or, if you prefer, dark energy). Indeed one of the strongest upper limits on the cosmological constant came from gravitational lensing measurements, or rather the dearth of them.

The reason for this negative conclusion was that, for a fixed value of the Hubble constant, in the presence of a cosmological constant the volume out to a given redshift is much larger than if there is no cosmological constant. That means the above integral predicts a high probability for lensing. Surveys however failed to turn up large numbers of strongly-lensed objects, hence the inference that the universe could not be dominated by a cosmological constant. This is, of course, assuming that the other terms in the integral are well understood and that the reason significant numbers of lensed systems weren’t found wasn’t just that they are tricky to identify…

Meanwhile, huge advances were made in other aspects of observational cosmology that established a standard cosmological model in which the cosmological constant makes up almost 75% of the energy budget of the Universe.

Now, 15 years later on, enter the Herschel Space Observatory, which turns out to be superb at identifying gravitational lenses. I posted about this here, in fact. Working in the far-infrared makes it impossible to resolve multiple images with Herschel – even with a 3.5m mirror in space, λ/D isn’t great for wavelengths of 500 microns! However, the vast majority of sources found during the Herschel ATLAS survey with large fluxes at these wavelengths can be identified as lenses simply because their brightness tells us they’ve probably been magnified by a lens. Candidates can then be followed up with other telescopes on the ground. A quick look during the Science Demonstration Phase of Herschel produced the first crop of firmly identified gravitational lens systems published in Science by Negrello et al. When the full data set has been analysed there should be hundreds of such systems, which will revolutionize this field.
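The λ/D point is just arithmetic, and a quick sketch makes it concrete:

```python
import math

# Rough diffraction-limited resolution of Herschel at 500 microns,
# illustrating why it cannot resolve multiple lensed images
# (typical strong-lens image separations are of order an arcsecond).
wavelength = 500e-6   # 500 microns, in metres
mirror = 3.5          # Herschel primary mirror diameter, in metres

theta = 1.22 * wavelength / mirror         # Rayleigh criterion, in radians
theta_arcsec = math.degrees(theta) * 3600  # convert to arcseconds
print(theta_arcsec)                        # roughly 36 arcsec
```

A beam tens of arcseconds across blurs any multiply-imaged system into a single blob, which is why the selection has to rely on apparent brightness instead.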

To see the potential (no pun intended) of this kind of measurement, take a look at these five systems from the SDP set:

These systems have measured (or estimated) source and lens redshifts. What is plotted is the conditional probability of a lens at some particular lens redshift, given the source redshift and the fact that strong lensing has occurred. Curves are given for SIS and NFW lens profiles and everything else is calculated according to the standard cosmological model. The green bars represent the measured lens redshifts. It’s early days, so there are only five systems, but you can already see that they are pointing towards low lens redshifts, favouring NFW over SIS; the yellow and light blue shading represents regions in which 68% of the likelihood lies. These data don’t strongly prefer one model over the other, but with hundreds more, and extra information about at least some of the lens systems (such as detailed determinations of the lens mass from deflections etc) we should be able to form more definite conclusions.

Unfortunately the proposal I submitted to STFC to develop a more detailed theoretical model and statistical analysis pipeline (Bayesian, of course) wasn’t funded. C’est la vie. That probably just means that someone smarter and quicker than me will do the necessary…

## ESA Endorses Euclid

Posted in Science Politics, The Universe and Stuff with tags , , , , , , on June 20, 2012 by telescoper

I’m banned from my office for part of this morning because the PHYSX elves are doing mandatory safety testing of all my electrical whatnots. Hence, I’m staying at home, sitting in the garden, writing this little blog post about a bit of news I found on Twitter earlier.

Apparently the European Space Agency, or rather the Science Programme Committee thereof, has given the green light to a space mission called Euclid whose aim is to “map the geometry of the dark Universe”, i.e. mainly to study dark energy. Euclid is an M-class mission, pencilled in for launch in around 2019, and it is basically the result of a merger between two earlier proposals, the Dark Universe Explorer (DUNE, intended to measure effects of weak gravitational lensing) and the Spectroscopic All Sky Cosmic Explorer (SPACE, to measure wiggles in the galaxy power spectrum known as baryon acoustic oscillations); Euclid will do both of these.

Although I’m not directly involved, as a cosmologist I’m naturally very happy to see this mission finally given approval. To be honest, I am a bit sceptical about how much light Euclid will actually shed on the nature of dark energy, as I think the real issue is a theoretical not an observational one. It will probably end up simply measuring the cosmological constant to a few extra decimal places, which is hardly the issue when the value we try to calculate theoretically is over a hundred orders of magnitude too large! On the other hand, big projects like this do need their MacGuffin…

The big concern being voiced by my colleagues, both inside and outside the cosmological community, is whether Euclid can actually be delivered within the agreed financial envelope (around 600 million euros). I’m not an expert in the technical issues relevant to this mission, but I’m told by a number of people who are that they are sceptical that the necessary instrumental challenges can be solved without going significantly over-budget. If the cost of Euclid does get inflated, that will have severe budgetary implications for the rest of the ESA science programme; I’m sure we all hope it doesn’t turn into another JWST.

I stand ready to be slapped down by more committed Euclideans for those remarks.

## Which side (of the Einstein equations) are you on?

Posted in The Universe and Stuff with tags , , , , , , on February 22, 2011 by telescoper

As a cosmologist, I am often asked why it is that people talk about the cosmological constant as if it were some sort of vacuum energy or “dark energy“. I was explaining it again to a student today so I thought I’d jot something down here so I can use it for future reference. In a nutshell, it goes like this. The original form of Einstein’s equations for general relativity can be written

$R_{ij}-\frac{1}{2}g_{ij}R = \frac{8\pi G}{c^4} T_{ij}.$

The precise meaning of the terms on the left hand side doesn’t really matter, but basically they describe the curvature of space-time and are derived from the Ricci tensor $R_{ij}$ and the metric tensor $g_{ij}$; this is how Einstein’s theory expresses the effect of gravity warping space. On the right hand side we have the energy-momentum tensor (sometimes called the stress tensor) $T_{ij}$, which describes the distribution of matter and its motion. Einstein’s equations can be summarised in John Archibald Wheeler’s pithy phrase: “Spacetime tells matter how to move; matter tells spacetime how to curve”.

In standard cosmology we usually assume that we can describe the matter-energy content of the Universe as a uniform perfect fluid, for which the energy-momentum tensor takes the simple form

$T_{ij} = -pg_{ij} +\left(p+\rho c^2\right) U_i U_j,$

in which $p$ is the pressure and $\rho$ the density; $U_i$ is the fluid’s 4-velocity.

Einstein famously modified (or perhaps generalised) the original equations by adding a cosmological constant term $\Lambda$ to the left hand side thus:

$R_{ij}-\frac{1}{2}g_{ij}R -\Lambda g_{ij} = \frac{8\pi G}{c^4} T_{ij}.$

Doing this essentially modifies the description of gravity, or appears to do so because it happens to be written on the left hand side of the equation. In fact one could equally well move the term involving $\Lambda$ to the other side and absorb it into a redefined energy-momentum tensor, $\tilde{T}_{ij}$:

$R_{ij}-\frac{1}{2}g_{ij}R = \frac{8\pi G}{c^4} \tilde{T}_{ij}.$

The new energy-momentum tensor needed to make this work is of the form

$\tilde{T}_{ij}=T_{ij}+ \left(\frac{\Lambda c^{4}}{8 \pi G} \right) g_{ij}= -\tilde{p} g_{ij} +\left(\tilde{p}+\tilde{\rho} c^2\right) U_i U_j$

where

$\tilde{p}=p-\frac{\Lambda c^4}{8\pi G}$

$\tilde{\rho}=\rho + \frac{\Lambda c^2}{8\pi G}$

Written this way, the cosmological constant no longer looks like a modification of gravity at all, but like an additional contribution to the pressure and density of the original fluid. In fact, considering the correction terms on their own it is clear that the cosmological constant acts exactly like an additional perfect fluid contribution with $p=-\rho c^2$.
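The bookkeeping can be checked numerically, component by component, for a comoving observer and a diagonal metric. This is a sketch with arbitrary illustrative values for p, ρ and Λ, not a physical calculation:

```python
import math

# Check that absorbing the Lambda term into the energy-momentum tensor
# is equivalent to adding a fluid with p_tilde = p - Lambda c^4/(8 pi G)
# and rho_tilde = rho + Lambda c^2/(8 pi G).
G, c = 6.674e-11, 3.0e8
p, rho, Lam = 2.0, 5.0, 1.0e-35      # arbitrary illustrative values

g = [1.0, -1.0, -1.0, -1.0]          # diagonal metric, signature (+,-,-,-)
U = [1.0, 0.0, 0.0, 0.0]             # normalised comoving 4-velocity

def T(p, rho):
    """Diagonal components of the perfect-fluid energy-momentum tensor."""
    return [-p * g[i] + (p + rho * c**2) * U[i] * U[i] for i in range(4)]

shift = Lam * c**4 / (8 * math.pi * G)
base = T(p, rho)
lhs = [base[i] + shift * g[i] for i in range(4)]            # T plus Lambda term
rhs = T(p - shift, rho + Lam * c**2 / (8 * math.pi * G))    # redefined fluid

print(all(abs(a - b) < 1e-6 * max(1.0, abs(a)) for a, b in zip(lhs, rhs)))
```

All four diagonal components agree, confirming that the Λ piece behaves as a fluid whose pressure and density corrections satisfy $p = -\rho c^2$.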

This is just one simple example wherein a modification of the gravitational part of the theory can be made to look like the appearance of a peculiar form of matter. More complicated versions of this idea – most of them entirely speculative – abound in theoretical cosmology. That’s just what cosmologists are like.

Over the last few decades cosmology has ~~suffered an invasion by~~ been stimulated and enriched by particle physicists who would like to understand how such a mysterious form of energy might arise in their theories. That at least partly explains why, in one sense at least, modern cosmologists prefer to dress to the right.

Incidentally, another interesting point is why people say such a fluid describes a cosmological “vacuum” energy. In the cosmological setting, i.e. assuming the fluid is distributed in a homogeneous and isotropic fashion, the energy density of the expanding Universe varies with (cosmological proper) time according to

$\dot{\rho}=-3\left(\frac{\dot{a}}{a}\right) \left(\rho + \frac{p}{c^2}\right)$

so for our strange fluid, the second term in brackets vanishes and we have $\dot{\rho}=0$. As the universe expands, normal forms of matter and radiation get diluted, but the energy density of this stuff remains constant. It seems to me quite appropriate to call something a vacuum if, no matter how hard you try, you can’t dilute it!

I hope this clarifies the situation.