## One Hundred Years of the Cosmological Constant

Posted in History, The Universe and Stuff on February 8, 2017 by telescoper

It was exactly one hundred years ago today – on 8th February 1917 – that a paper was published in which Albert Einstein explored the cosmological consequences of his general theory of relativity, in the course of which he introduced the concept of the cosmological constant.

For the record the full reference to the paper is: Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie and it was published in the Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften. You can find the full text of the paper here. There’s also a nice recent discussion of it by Cormac O’Raifeartaigh  and others on the arXiv here.

Here is the first page:

It’s well worth looking at this paper – even if your German is as rudimentary as mine – because the argument Einstein constructs is rather different from what you might imagine (or at least that’s what I thought when I first read it). As you see, it begins with a discussion of a modification of Poisson’s equation for gravity.

As is well known, Einstein introduced the cosmological constant in order to construct a static model of the Universe. The 1917 paper pre-dates the work of Friedmann (1922) and Lemaître (1927) that established much of the language and formalism used to describe cosmological models nowadays, so I thought it might be interesting just to recapitulate the idea using modern notation. Actually, in honour of the impending centenary I did this briefly in my lecture on Physics of the Early Universe yesterday.

To simplify matters I’ll just consider a “dust” model, in which pressure can be neglected. In this case, the essential equations governing a cosmological model satisfying the Cosmological Principle are:

$\ddot{a} = -\frac{4\pi G \rho a }{3} +\frac{\Lambda a}{3}$

and

$\dot{a}^2= \frac{8\pi G \rho a^2}{3} +\frac{\Lambda a^2}{3} - kc^2.$

In these equations $a(t)$ is the cosmic scale factor (which measures the relative size of the Universe) and dots are derivatives with respect to cosmological proper time, $t$. The density of matter is $\rho>0$ and the cosmological constant is $\Lambda$. The quantity $k$ is the curvature of the spatial sections of the model, i.e. the surfaces on which $t$ is constant.

Now our task is to find a solution of these equations with $a(t)= A$, say, constant for all time, i.e. that $\dot{a}=0$ and $\ddot{a}=0$ for all time.

The first thing to notice is that if $\Lambda=0$ then this is impossible. One can solve the second equation to make the LHS zero at a particular time by matching the density term to the curvature term, but that only makes a universe that is instantaneously static. Since $\rho>0$, the first equation gives $\ddot{a}<0$, so the system inevitably evolves away from the situation in which $\dot{a}=0$.

With the cosmological constant term included, it is a different story. First make $\ddot{a}=0$  in the first equation, which means that

$\Lambda=4\pi G \rho.$

Now we can make $\dot{a}=0$ in the second equation by setting

$\Lambda a^2 = 4\pi G \rho a^2 = kc^2.$

This gives a static universe model, usually called the Einstein universe. Notice that the curvature must be positive, so this is a universe of finite spatial extent but infinite duration.
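The instability of this solution, which is what ultimately doomed it as a physical model, is easy to see numerically. Here is a toy check (not part of the original post) that integrates the dust equations above in units chosen so that $4\pi G \rho_0 = 1$ at the static radius $a=1$ (so $\Lambda = 1$ too):

```python
# Toy check of the Einstein static solution and its instability,
# in units where 4*pi*G*rho_0 = 1 at the static radius a = 1.

def accel(a, lam=1.0, rho0=1.0):
    """a'' = -(4 pi G rho / 3) a + (Lambda / 3) a, with rho = rho0 / a^3
    since rho * a^3 is conserved for dust; 4*pi*G is absorbed into rho0."""
    rho = rho0 / a**3
    return (-rho / 3.0) * a + (lam / 3.0) * a

# At a = 1 the density and Lambda terms cancel exactly: the static model.
assert abs(accel(1.0)) < 1e-12

# Perturb the radius by 0.1% and integrate (semi-implicit Euler):
# the model runs away from the static solution.
a, adot, dt = 1.001, 0.0, 0.01
for _ in range(2000):
    adot += accel(a) * dt
    a += adot * dt
print(a)   # grows far beyond 1: the static solution is unstable
```

A slightly overdense Einstein universe recollapses and a slightly underdense one expands forever, which is why the model cannot survive any perturbation.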

This idea formed the basis of Einstein’s own cosmological thinking until the early 1930s, when observations began to make it clear that the universe was not static at all, but expanding. In that light it seems that adding the cosmological constant wasn’t really justified, and it is often said that Einstein regarded its introduction as his “biggest blunder”.

I have two responses to that. One is that general relativity, when combined with the cosmological principle, but without the cosmological constant, requires the universe to be dynamical rather than static. If anything, therefore, you could argue that Einstein’s biggest blunder was to have failed to predict the expansion of the Universe!

The other response is that, far from being an ad hoc modification of his theory, there are actually sound mathematical reasons for allowing the cosmological constant term. Although Einstein’s original motivation for considering this possibility may have been misguided, he was justified in introducing it. He was right, if perhaps for the wrong reasons. Nowadays observational evidence suggests that the expansion of the universe may be accelerating. The first equation above tells you that this is only possible if $\Lambda\neq 0$.

Finally, I’ll just mention another thing in the light of the Einstein (1917) paper. It is clear that Einstein thought of the cosmological constant as a modification of the left hand side of the field equations of general relativity, i.e. the part that expresses the effect of gravity through the curvature of space-time. Nowadays we tend to think of it instead as a peculiar form of energy (called dark energy) that has negative pressure. This sits on the right hand side of the field equations instead of the left, so it is not so much a modification of the law of gravity as an exotic form of energy. You can see the details in an older post here.
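In modern notation (a standard identification, not anything spelled out in the 1917 paper), moving the $\Lambda$ term to the right-hand side amounts to adding a fluid with

$\rho_\Lambda = \frac{\Lambda}{8\pi G}, \qquad p_\Lambda = -\rho_\Lambda c^2,$

so that in the acceleration equation the source term $\rho + 3p/c^2$ picks up a contribution $\rho_\Lambda - 3\rho_\Lambda = -2\rho_\Lambda$, which reproduces the $+\Lambda a/3$ term above exactly.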

## The Dipole Repeller

Posted in The Universe and Stuff on February 2, 2017 by telescoper

An interesting bit of local cosmology news has been hitting the headlines over the last few days. The story relates to a paper by Yehuda Hoffman et al. published in Nature Astronomy on 30th January. The abstract reads:

Our Local Group of galaxies is moving with respect to the cosmic microwave background (CMB) with a velocity of V_CMB = 631 ± 20 km s⁻¹ and participates in a bulk flow that extends out to distances of ~20,000 km s⁻¹ or more. There has been an implicit assumption that overabundances of galaxies induce the Local Group motion. Yet underdense regions push as much as overdensities attract, but they are deficient in light and consequently difficult to chart. It was suggested a decade ago that an underdensity in the northern hemisphere roughly 15,000 km s⁻¹ away contributes significantly to the observed flow. We show here that repulsion from an underdensity is important and that the dominant influences causing the observed flow are a single attractor – associated with the Shapley concentration – and a single previously unidentified repeller, which contribute roughly equally to the CMB dipole. The bulk flow is closely anti-aligned with the repeller out to 16,000 ± 4,500 km s⁻¹. This ‘dipole repeller’ is predicted to be associated with a void in the distribution of galaxies.

The effect of this “void in the distribution of galaxies” has been described in rather lurid terms as “Milky Way being pushed through space by cosmic dead zone” in a Guardian piece on this research.

If you’re confused by this into thinking that some sort of anti-gravity is at play, then it isn’t really anything so exotic. If the Universe were completely homogeneous and isotropic – as our simplest models assume – then it would be expanding at the same rate in all directions. This would be a pure “Hubble flow”, with galaxies appearing to recede from an observer with a speed proportional to their distance, $v = H_0 d$.

But the Universe isn’t exactly smooth. As well as the galaxies themselves, there are clusters, filaments and sheets of galaxies and a corresponding collection of void regions, together forming a huge and complex “cosmic web” of large-scale structure. This distorts the Hubble flow by inducing peculiar motions (i.e. departures from the pure expansion). A part of the Universe which is denser than average (e.g. a cluster or supercluster) expands less  quickly than average, a part which is less dense (i.e. a void) expands more quickly than average. Relative to the global expansion rate, clusters represent a “pull” and voids represent a “push”. That’s really all there is to it.
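The size of the push or pull can be estimated from linear perturbation theory. A standard result (not taken from the Hoffman et al. paper) is that a spherical region with mean interior density contrast $\delta$ induces a peculiar velocity $v \approx -\frac{1}{3} f H_0 \delta r$ at radius $r$, where $f \approx \Omega_m^{0.55}$ is the linear growth rate. A rough sketch, with illustrative numbers only:

```python
# Linear-theory peculiar velocities around a spherical over/underdensity:
# v = -(1/3) * f * H0 * delta * r. Numbers here are illustrative,
# not values from Hoffman et al.

H0 = 70.0          # Hubble constant in km/s/Mpc (assumed value)
f = 0.3 ** 0.55    # growth rate ~ Omega_m^0.55, for Omega_m = 0.3

def v_peculiar(delta, r_mpc):
    """Peculiar velocity (km/s) at radius r_mpc induced by a sphere of
    mean interior density contrast delta (delta < 0 for a void)."""
    return -(1.0 / 3.0) * f * H0 * delta * r_mpc

# An overdensity pulls (negative = infall toward it) ...
print(v_peculiar(+0.5, 100.0))   # < 0: attraction
# ... while a void of equal amplitude pushes outward just as hard.
print(v_peculiar(-0.5, 100.0))   # > 0: repulsion
```

The symmetry between the two cases is the whole point: a void of contrast $-\delta$ pushes exactly as hard as a cluster of contrast $+\delta$ pulls.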

The difficult part about this kind of study is measuring a sufficient number of peculiar motions of galaxies around our own to make a detailed map of what’s going on in the local velocity field. That’s particularly hard for galaxies near the plane of the Milky Way disk as they tend to be obscured by dust. Nevertheless, after plugging away at this for many years, the authors of the Nature paper have generated some fascinating results. It seems that our Galaxy and other members of the Local Group lie between a dense supercluster (often called the Shapley concentration) and an underdense region, so the peculiar velocity field around us has an approximately dipole structure.

They’ve even made a nice video to show you what’s going on, so I don’t have to explain any further!

## Fake News of the Holographic Universe

Posted in Astrohype, The Universe and Stuff on February 1, 2017 by telescoper

It has been a very busy day today, but I thought I’d grab a few minutes to rant about something inspired by a cosmological topic but symptomatic of a malaise that extends far wider than fundamental science.

The other day I found a news item with the title Study reveals substantial evidence of holographic universe. You can find a fairly detailed discussion of the holographic principle here, but the name is fairly self-explanatory: the familiar hologram is a two-dimensional object that contains enough information to reconstruct a three-dimensional object. The holographic principle extends this to the idea that information pertaining to a higher-dimensional space may reside on a lower-dimensional boundary of that space. It’s an idea which has gained some traction in the context of the black hole information paradox, for example.

There are people far more knowledgeable about the holographic principle than me, but naturally what grabbed my attention was the title of the news item: Study reveals substantial evidence of holographic universe. That got me really excited, as I wasn’t previously aware that there was any observed property of the Universe that showed any unambiguous evidence for the holographic interpretation, or indeed that models based on this principle could describe the available data better than the standard ΛCDM cosmological model. Naturally I went to the original paper on the arXiv by Niayesh Afshordi et al. to which the news item relates. Here is the abstract:

We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive to the standard ΛCDM model of cosmology. These models are based on three dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power-law used in ΛCDM, they still provide an excellent fit to data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles (i.e. l≲30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFT’s can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.

The third sentence states explicitly that, according to the Bayesian evidence (see here for a review of this), the holographic models do not fit the data even as well as the standard model (unless the very low multipoles of the CMB are excluded, and even then they’re only marginally better).
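For readers unfamiliar with the Bayesian evidence: the quantity usually quoted is the difference in log-evidence between two models, which translates directly into betting odds. A minimal sketch, with made-up numbers rather than the values from the Afshordi et al. paper:

```python
import math

# Bayesian model comparison via the evidence (marginal likelihood) Z.
# The log-evidence values below are invented for illustration only,
# NOT the numbers in Afshordi et al.

def bayes_factor(ln_Z_model, ln_Z_lcdm):
    """Posterior odds of a rival model vs LCDM, assuming equal prior odds:
    K = Z_model / Z_lcdm = exp(ln Z_model - ln Z_lcdm)."""
    return math.exp(ln_Z_model - ln_Z_lcdm)

# If LCDM has the higher log-evidence, the odds favour LCDM:
K = bayes_factor(ln_Z_model=-10.0, ln_Z_lcdm=-8.0)
print(K)   # exp(-2) ~ 0.14, i.e. odds of roughly 7 to 1 in favour of LCDM
```

The point is that a model with the lower evidence is *disfavoured* by the data, which is precisely the opposite of what the press-release title implies.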

I think the holographic principle is a very interesting idea and it may indeed at some point prove to provide a deeper understanding of our universe than our current models. Nevertheless it seems clear to me that the title of this news article is extremely misleading. Current observations do not really provide any evidence in favour of the holographic models, and certainly not “substantial evidence”.

The wider point should be obvious. We scientists rightly bemoan the era of “fake news”. We like to think that we occupy the high ground, by rigorously weighing up the evidence, drawing conclusions as objectively as possible, and reporting our findings with a balanced view of the uncertainties and caveats. That’s what we should be doing. Unless we do that we’re not communicating science but engaged in propaganda, and that’s a very dangerous game to play as it endangers the already fragile trust the public place in science.

The authors of the paper are not entirely to blame as they did not write the piece that kicked off this rant, which seems to have been produced by the press office at the University of Southampton, but they should not have consented to it being released with such a misleading title.

## How the Nonbaryonic Dark Matter Theory Grew [CEA]

Posted in The Universe and Stuff on January 24, 2017 by telescoper

Another arXiver post, this time from the great Jim Peebles. He has always been a skeptic about dark matter, especially cold dark matter, and it is the hallmark of a great scientist that he weighs up the evidence as objectively as possible.

This is a long review, but well worth reading for its important insights and historical perspective. I agree that the case for non-baryonic dark matter is compelling, but it is also far from proved and it’s still possible that an alternative, equally or more compelling, theory will be found.

http://arxiv.org/abs/1701.05837

The evidence is that the mass of the universe is dominated by an exotic nonbaryonic form of matter largely draped around the galaxies. It approximates an initially low pressure gas of particles that interact only with gravity, but we know little more than that. Searches for detection thus must follow many difficult paths to a great discovery, what the universe is made of. The nonbaryonic picture grew out of a convergence of evidence and ideas in the early 1980s. Developments two decades later considerably improved the evidence, and advances since then have made the case for nonbaryonic dark matter compelling.

P. Peebles
Mon, 23 Jan 17

Comments: An essay to accompany articles on dark matter detection in Nature Astronomy


## Status of Dark Matter in the Universe [CEA]

Posted in The Universe and Stuff on January 11, 2017 by telescoper

Courtesy of arXiver, here’s a nice review article if you want to get up to date with the latest ideas and evidence about Dark Matter…

http://arxiv.org/abs/1701.01840

Over the past few decades, a consensus picture has emerged in which roughly a quarter of the universe consists of dark matter. I begin with a review of the observational evidence for the existence of dark matter: rotation curves of galaxies, gravitational lensing measurements, hot gas in clusters, galaxy formation, primordial nucleosynthesis and cosmic microwave background observations. Then I discuss a number of anomalous signals in a variety of data sets that may point to discovery, though all of them are controversial. The annual modulation in the DAMA detector and/or the gamma-ray excess seen in the Fermi Gamma Ray Space Telescope from the Galactic Center could be due to WIMPs; a 3.5 keV X-ray line from multiple sources could be due to sterile neutrinos; or the 511 keV line in INTEGRAL data could be due to MeV dark matter. All of these would require further confirmation in other experiments…


## Galaxy Formation in the EAGLE Project

Posted in The Universe and Stuff on December 8, 2016 by telescoper

Yesterday I went to a nice Colloquium by Rob Crain of Liverpool John Moores University (which is in the Midlands). Here’s the abstract of his talk, which was entitled Cosmological hydrodynamical simulations of the galaxy population:

I will briefly recap the motivation for, and progress towards, numerical modelling of the formation and evolution of the galaxy population – from cosmological initial conditions at early epochs through to the present day. I will introduce the EAGLE project, a flagship program of such simulations conducted by the Virgo Consortium. These simulations represent a major development in the discipline, since they are the first to broadly reproduce the key properties of the evolving galaxy population, and do so using energetically-feasible feedback mechanisms. I shall present a broad range of results from analyses of the EAGLE simulation, concerning the evolution of galaxy masses, their luminosities and colours, and their atomic and molecular gas content, to convey some of the strengths and limitations of the current generation of numerical models.

I added the link to the EAGLE project so you can find more information. As one of the oldies in the audience I can’t help remembering the old days of the galaxy formation simulation game. When I started my PhD back in 1985 the state of the art was a gravity-only simulation of $32^3$ particles in a box. Nowadays one can manage about $2000^3$ particles at the same time as having a good go at dealing not only with gravity but also the complex hydrodynamical processes involved in assembling a galaxy of stars, gas, dust and dark matter from a set of primordial fluctuations present in the early Universe. In these modern simulations one does not just track the mass distribution but also various thermodynamic properties such as temperature, pressure, internal energy and entropy, which means that they require large supercomputers. This certainly isn’t a solved problem – different groups get results that differ by an order of magnitude in some key predictions – but the game has certainly moved on dramatically in the past thirty years or so.
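To give a flavour of what the “gravity-only” end of this game involves, here is a toy direct-summation N-body sketch (nothing like the tree and mesh codes actually used for $32^3$, let alone $2000^3$, particles, and in plain rather than comoving coordinates):

```python
import random

# Toy gravity-only N-body integrator: direct summation with a softening
# length and a leapfrog (kick-drift-kick) step. Purely illustrative --
# real cosmological codes use tree/mesh gravity and comoving coordinates.
G, EPS = 1.0, 0.05   # units with G = 1; softening length

def accelerations(pos, mass):
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(c * c for c in d) + EPS ** 2   # softened distance^2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * d[k] * inv_r3
    return acc

def step(pos, vel, mass, dt):
    acc = accelerations(pos, mass)
    for i in range(len(pos)):                       # half kick + drift
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]
            pos[i][k] += dt * vel[i][k]
    acc = accelerations(pos, mass)
    for i in range(len(pos)):                       # closing half kick
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]

random.seed(1)
N = 64                                              # cf. 32**3 in 1985
pos = [[random.random() for _ in range(3)] for _ in range(N)]
vel = [[0.0, 0.0, 0.0] for _ in range(N)]
mass = [1.0 / N] * N
for _ in range(10):
    step(pos, vel, mass, dt=0.01)
print(pos[0])   # particles have begun to fall toward denser regions
```

The direct sum costs $O(N^2)$ per step, which is exactly why tree and particle-mesh methods (and the hydrodynamics on top of them) need large supercomputers at EAGLE-scale particle numbers.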

Another thing that has certainly improved a lot is data visualization: here is a video of one of the EAGLE simulations, showing a region of the Universe about 25 MegaParsecs across. The gas is colour-coded for temperature. As the simulation evolves you can see the gas first condense into the filaments of the Cosmic Web, thereafter forming denser knots in which stars form and become galaxies, experiencing in some cases explosive events which expel the gas. It’s quite a messy business, which is why one has to do these things numerically rather than analytically, but it’s certainly fun to watch!

## Does the fine structure constant vary?

Posted in The Universe and Stuff on November 16, 2016 by telescoper