Archive for density parameter

MaxEnt 2016: Norton’s Dome and the Cosmological Density Parameter

Posted in The Universe and Stuff on July 11, 2016 by telescoper

The second in my sequence of posts tangentially related to talks at this meeting on Maximum Entropy and Bayesian Methods in Science and Engineering is inspired by a presentation this morning by Sylvia Wenmackers. The talk featured an example which was quite new to me called Norton’s Dome. There’s a full discussion of the implications of this example at John D. Norton’s own website, from which I have taken the following picture:

[Image: Norton’s dome, with the equation describing its shape, from John D. Norton’s website]

This is basically a problem in Newtonian mechanics, in which a particle rolls down from the apex of a dome with a particular shape in response to a vertical gravitational field. The solution is well-determined and shown in the diagram.

An issue arises, however, when you consider the case in which the particle starts at the apex of the dome with zero velocity. One solution is that the particle stays put forever. However, it can be shown that there are other solutions in which the particle sits at the top for an arbitrary (finite) time before rolling down. Consider, for example, a particle launched up the dome from some point with just enough kinetic energy to reach the top: it arrives momentarily at rest, but then rolls down again. The descending half of that trajectory is itself a valid solution starting from rest at the apex.
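
To make this concrete, here (in outline, using the units on Norton’s page, in which the constants in the equation of motion are absorbed) is the family of solutions. The radial equation of motion along the surface reduces to \ddot{r} = r^{1/2}, and besides the trivial solution r(t) = 0 there is, for every T \geq 0,

r(t) = 0 \quad (t \leq T), \qquad r(t) = \tfrac{1}{144}\,(t-T)^{4} \quad (t \geq T),

as direct substitution confirms: \ddot{r} = (t-T)^{2}/12 = \sqrt{r}. Each member of the family satisfies r(0)=0 and \dot{r}(0)=0, with the moment of departure T completely arbitrary.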

Norton argues that this problem demonstrates a certain kind of indeterminism in Newtonian Mechanics. The mathematical problem with the specified initial conditions clearly has a solution in which the ball stays at the top forever. This solution is unstable, which is a familiar situation in mechanics, but this equilibrium has an unusual property related to the absence of Lipschitz continuity. One might expect that an infinitesimal asymmetric perturbation of the particle or of the shape of the surface would be needed to send the particle rolling down the slope, but in this case none is needed. This is because there isn’t just one solution that has zero velocity at the equilibrium, but an entire family, as described above. This is both curious and interesting, and it does raise the question of how to define a probability measure that describes these solutions.

I don’t really want to go into the philosophical implications of this cute example, but it did strike me that there’s a similarity with an interesting issue in cosmology that I’ve blogged about before (in different terms).

This probably seems to have very little to do with physical cosmology, but now forget about domes and think instead about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to the critical value it would need to have for the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result of this is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0.

At the beginning, i.e. at the Big Bang itself, all the Friedmann models have Ω arbitrarily close to unity at arbitrarily early times: the limit of Ω as t tends to zero is 1.

If the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops and recollapse begins. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly: one part in 10⁶⁰ will do – the Universe expands forever and Ω evolves towards a value very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor ten either way – the Universe has to be finely tuned.
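
In fact, for a matter-only (dust) Friedmann model the evolution of Ω has a closed form, Ω(a) = Ω0/(Ω0 + (1−Ω0)a), with the scale factor a normalised to unity today, which makes the knife-edge character of Ω=1 easy to see numerically. Here is a minimal Python sketch (my own toy illustration, not taken from any of the papers mentioned here; for Ω0>1 the formula only applies up to the moment of maximum expansion):

# Evolution of the density parameter in a matter-only (dust) Friedmann model:
#   Omega(a) = Omega_0 / (Omega_0 + (1 - Omega_0) * a),   with a = 1 today.

def omega(a, omega_0):
    """Density parameter at scale factor a, given its value omega_0 at a = 1."""
    return omega_0 / (omega_0 + (1.0 - omega_0) * a)

for omega_0 in (0.3, 1.0, 2.0):
    print(f"Omega_0 = {omega_0}:")
    for a in (1e-6, 1e-3, 1.0, 1.9, 1e3):
        # For Omega_0 > 1 the expansion halts at a = Omega_0 / (Omega_0 - 1).
        if omega_0 > 1.0 and a >= omega_0 / (omega_0 - 1.0):
            print(f"  a = {a:g}: past maximum expansion (recollapsing)")
        else:
            print(f"  a = {a:g}: Omega = {omega(a, omega_0):.6g}")

Every case starts arbitrarily close to Ω=1 as a tends to zero; the under-dense case then drains away towards zero, while the over-dense one blows up as it approaches maximum expansion.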

The evolution of Ω is neatly illustrated by the following phase-plane diagram (taken from an old paper by Madsen & Ellis) describing a cosmological model involving a perfect fluid with an equation of state p = (γ−1)ρc². This is what happens for γ>2/3 (which includes dust, relativistic particles, etc.):

[Image: phase-plane diagram from Madsen & Ellis]

The top panel shows how the density parameter evolves with scale factor S; the bottom panel shows a completion of this portrait obtained using a transformation that allows the point at infinity to be plotted on a finite piece of paper (or computer screen).

As discussed above this picture shows that all these Friedmann models begin at S=0 with Ω arbitrarily close to unity and that the value of Ω=1 is an unstable fixed point, just like the situation of the particle at the top of the dome. If the universe has Ω=1 exactly at some time then it will stay that way forever. If it is perturbed, however, then it will eventually diverge and end up collapsing (Ω>1) or going into free expansion (Ω<1).  The smaller the initial perturbation,  the longer the system stays close to Ω=1.
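
In equations (this is the standard dynamical-systems form for a single fluid with p = (γ−1)ρc², and I believe it is the system behind the Madsen & Ellis portrait): writing N = \ln S for the number of e-folds, the density parameter obeys

\frac{d\Omega}{dN} = (3\gamma - 2)\,\Omega\,(\Omega - 1),

with fixed points at Ω=0 and Ω=1. For γ>2/3 the coefficient (3γ−2) is positive, so Ω=1 is unstable and Ω=0 is stable, exactly as the diagram shows; for γ<2/3 the signs flip and Ω=1 becomes an attractor instead – a fact that will matter when we come to inflation below.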

The fact that all trajectories start at Ω(S=0)=1 means that one has to be very careful in assigning some sort of probability measure on this parameter, just as is the case with the Norton’s Dome problem I started with. About twenty years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is what is called an improper prior.
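
To see the impropriety explicitly, note that the integral of this prior diverges logarithmically at the end-points; for example, near Ω = 1,

\int_{1}^{1+\epsilon} \frac{d\Omega}{\Omega(\Omega-1)} \sim \int_{0}^{\epsilon} \frac{dx}{x} \rightarrow \infty,

and similarly near Ω = 0, so no choice of normalising constant can make P(Ω) integrate to unity.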

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1.  Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, but not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to have to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe is in between  two states in which we might have expected it to lie.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate; in other words, the equation of state of the contents of the Universe has γ<2/3 rather than the γ>2/3 described above. This drastically changes the arguments I gave above.

Without inflation the case with Ω=1 is unstable: a slight perturbation to the Universe sends it diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case behaves very differently: not only is it stable, it becomes an attractor to which all possible universes converge. Here’s what the phase plane looks like in this case:

[Image: phase-plane diagram for the inflationary case, γ < 2/3]

Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity.

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the radius of curvature becomes enormous and the curvature itself infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the Universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.
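
In terms of the usual dimensionless parameters (a sketch using today’s approximate values; the exact numbers don’t matter for the argument), spatial flatness requires the contributions to sum to unity:

\Omega_{m} + \Omega_{\Lambda} = 1, \qquad {\rm e.g.} \quad \Omega_{m} \simeq 0.3, \quad \Omega_{\Lambda} \simeq 0.7,

so a low matter density is perfectly compatible with the zero spatial curvature that inflation produces.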

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is therefore rational to say that inflation is more likely to have happened than not.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.

 

Cosmology – Confusion on a Higher Level?

Posted in Biographical, The Universe and Stuff on January 19, 2015 by telescoper

I’ve already posted the picture below, which was taken at a conference in Leiden (Netherlands) in 1995. Various shady characters masquerading as “experts” were asked by the audience of graduate students at a summer school to give their favoured values for the cosmological parameters (from top to bottom: the Hubble constant, density parameter, cosmological constant, curvature parameter and age of the Universe).

From left to right we have Alain Blanchard (AB), Bernard Jones (BJ, standing), John Peacock (JP), me (yes, with a beard and a pony tail – the shame of it), Vincent Icke (VI), Rien van de Weygaert (RW) and Peter Katgert (PK, standing). You can see on the blackboard that the only one to get anywhere close to correctly predicting the parameters of what would become the standard cosmological model was, in fact, Rien van de Weygaert.

Well, my excuse for posting this again is that a similar exercise took place at a meeting in Oslo (Norway), at which a panel of experts and Alan Heavens were asked to do the same thing. I wasn’t there myself, but grabbed the evidence from Facebook:

[Image: the expert panel at the Oslo meeting]

I’ll leave it as an exercise for the reader to identify the contributors. The 2015 version of the results is considerably more high-tech than the 1995 one, but in case you can’t read what is on the screen here are the responses:

[Image: the panel’s responses displayed on screen]

The emphasis here is on possible departures from the standard model, whereas in 1995 the standard model hadn’t yet been established. I’m not sure exactly what questions were asked but I think my answers would have been: 3+1;  maybe; maybe; don’t know but (probably) not CDM; something indistinguishable from GR given current experiments; Lambda; and maybe. I’ve clearly become a skeptic in my old age.

Anyway, this “progress” reminded me of a quote I used to have on my office door when I was a graduate student in the Astronomy Centre at the University of Sussex many years ago:

We have not succeeded in answering all our problems. The answers we have found only serve to raise a whole set of new questions. In some ways we feel we are as confused as ever, but we believe we are confused on a higher level and about more important things.

The attribution of that quote is far from certain, but I was told that it was posted outside the mathematics reading room, Tromsø University. Which is in Norway. Apt, or what?

Death of a Cosmological Parameter

Posted in The Universe and Stuff on October 21, 2011 by telescoper

I’m sad to have to use the medium of this blog to report the tragic death of the Hubble parameter. It had been declining for some time and, despite appearing to pick up recently, the end was somewhat inevitable. Condolences to the other parameters, especially Ω (who was in a close relationship with H), on this sad loss.

The original photograph (and joke) may be found here.

 

False Convergence and the Bandwagon Effect

Posted in The Universe and Stuff on July 3, 2011 by telescoper

In idle moments, such as can be found during sunny Sunday summer afternoons in the garden, it’s interesting to reminisce about things you worked on in the past. Sometimes such trips down memory lane turn up some quite interesting lessons for the present, especially when you look back at old papers which were published when the prevailing paradigms were different. In this spirit I was lazily looking through some old manuscripts on an ancient laptop I bought in 1993. I thought it was bust, but it turns out to be perfectly functional; they clearly made things to last in those days! I found a paper by Plionis et al. which I co-wrote in 1992; the abstract is here:

We have reanalyzed the QDOT survey in order to investigate the convergence properties of the estimated dipole and the consequent reliability of the derived value of \Omega^{0.6}/b. We find that there is no compelling evidence that the QDOT dipole has converged within the limits of reliable determination and completeness. The value of  \Omega_0 derived by Rowan-Robinson et al. (1990) should therefore be considered only as an upper limit. We find strong evidence that the shell between 140 and 160/h Mpc does contribute significantly to the total dipole anisotropy, and therefore to the motion of the Local Group with respect to the cosmic microwave background. This shell contains the Shapley concentration, but we argue that this concentration itself cannot explain all the gravitational acceleration produced by it; there must exist a coherent anisotropy which includes this structure, but extends greatly beyond it. With the QDOT data alone, we cannot determine precisely the magnitude of any such anisotropy.

(I’ve added a link to the Rowan-Robinson et al. paper for reference). This was a time long before the establishment of the current standard model of cosmology (“ΛCDM”), and in those days the favoured theoretical paradigm was a flat universe without a cosmological constant but with a critical density of matter, corresponding to a value of the density parameter \Omega_0 = 1.

In the late eighties and early nineties, a large number of observational papers emerged claiming to provide evidence for the (then) standard model, the Rowan-Robinson et al. paper being just one. The idea behind this analysis is very neat. When we observe the cosmic microwave background we find it has a significant variation in temperature across the sky on a scale of 180°, i.e. it has a strong dipole component:

[Image: COBE map of the microwave sky, showing the dipole component]

There is also some contamination from Galactic emission across the middle, but you can see the dipole in the COBE map above. The interpretation of this is that the Earth is not at rest. The temperature variation caused by our motion with respect to a frame in which the cosmic microwave background (CMB) would be isotropic (i.e. be the same temperature everywhere on the sky) is just \Delta T/T \sim v/c; the measured dipole amplitude of a few millikelvin corresponds to a velocity of a few hundred kilometres per second. However, the Earth moves around the Sun. The Sun orbits the centre of the Milky Way Galaxy. The Milky Way Galaxy moves within the Local Group of Galaxies. The Local Group falls toward the Virgo Cluster of Galaxies. We know these velocities pretty well, but they don’t account for the size of the observed dipole anisotropy. The extra bit must be due to the gravitational pull of larger-scale structures.

If one can map the distribution of galaxies over the whole sky, as was first done with the QDOT galaxy redshift survey, then one can compare the dipole expected from the distribution of galaxies with that measured using the CMB. We can only count the galaxies – we don’t know how much mass is associated with each one – but if we find that the CMB and the galaxy dipoles line up in direction, we can estimate the total amount of mass needed to give the right magnitude. I refer you to the papers for details.

Rowan-Robinson et al. argued that the QDOT galaxy dipole reaches convergence with the CMB dipole (i.e. they line up with one another) within a relatively small volume – small by cosmological standards, I mean: 100 Mpc or so – which means that there has to be quite a lot of mass in that small volume to generate the relatively large velocity indicated by the CMB dipole. Hence the result was taken to indicate a high-density universe.

In our paper we questioned whether convergence had actually been reached within the QDOT sample. This is crucial because if there is significant structure beyond the scale encompassed by the survey a lower overall density of matter may be indicated. We looked at a deeper survey (of galaxy clusters) and found evidence of a large-scale structure (up to 200 Mpc) that was lined up with the smaller scale anisotropy found by the earlier paper. Our best estimate was \Omega_0\sim 0.3, with a  large uncertainty. Now, 20 years later, we have a  different standard cosmology which does indeed have \Omega_0 \simeq 0.3. We were right.

Now I’m not saying that there was anything actually wrong with the Rowan-Robinson et al. paper – the uncertainties in their analysis are clearly stated, in the body of the paper as well as in the abstract. However, that result was widely touted as evidence for a high-density universe which was an incorrect interpretation. Many other papers published at the time involved similar misinterpretations. It’s good to have a standard model, but it can lead to a publication bandwagon – papers that agree with the paradigm get published easily, while those that challenge it (and are consequently much more interesting) struggle to make it past referees. The accumulated weight of evidence in cosmology is much stronger now than it was in 1990, of course, so the standard model is a more robust entity than the corresponding version of twenty years ago. Nevertheless, there’s still a danger that by treating ΛCDM as if it were the absolute truth, we might be closing our eyes to precisely those clues that will lead us to an even better understanding.  The perils of false convergence  are real even now.

As a grumpy postscript, let me just add that Plionis et al. has attracted a meagre 18 citations whereas Rowan-Robinson et al. has 178. Being right doesn’t always get you cited.

The Evidence

Posted in Biographical, The Universe and Stuff on September 25, 2009 by telescoper

Further to my recent post about the evidence for a low-density Universe, I thought I’d embarrass all concerned with this image, taken in Leiden in 1995.

Various shady characters masquerading as “experts” were asked by the audience of graduate students at a summer school to give their favoured values for the cosmological parameters (from top to bottom: the Hubble constant, density parameter, cosmological constant, curvature parameter and age of the Universe).

From left to right we have Alain Blanchard (AB), Bernard Jones (BJ, standing), John Peacock (JP), me (yes, with a beard and a pony tail – the shame of it), Vincent Icke (VI), Rien van de Weygaert (RW) and Peter Katgert (PK, standing). You can see on the blackboard that the only one to get anywhere close to correctly predicting the parameters of what would become the standard cosmological model was, in fact, Rien van de Weygaert.

The Cosmic Tightrope

Posted in The Universe and Stuff on May 3, 2009 by telescoper

Here’s a thought experiment for you.

Imagine you are standing outside a sealed room. The contents of the room are hidden from you, except for a small window covered by a curtain. You are told that you can open the curtain once and only briefly to take a peep at what is inside, and you may do this whenever you feel the urge.

You are told what is in the room. It is bare except for a tightrope suspended across it about two metres in the air. Inside the room is a man who at some time in the past – you’re not told when – began walking along the tightrope. His instructions were to carry on walking backwards and forwards along the tightrope until he falls off, either through fatigue or lack of balance. Once he falls he must lie motionless on the floor.

You are not told whether he is skilled in tightrope-walking or not, so you have no way of telling whether he can stay on the rope for a long time or a short time. Neither are you told when he started his stint as a stuntman.

What do you expect to see when you eventually pull the curtain?

Well, if the man does fall off sometime it will clearly take him a very short time to drop to the floor. Once there he has to stay there. One outcome therefore appears very unlikely: that at the instant you open the curtain, you see him in mid-air between a rope and a hard place.
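
To put a number on “very short” (a back-of-envelope estimate of mine, taking the quoted rope height): a free fall from h = 2 m takes

t = \sqrt{2h/g} \approx \sqrt{2 \times 2/9.8} \approx 0.64 \ {\rm s},

so unless the whole performance lasts only a few seconds, the probability of opening the curtain during the fall is tiny.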

Whether you expect him to be on the rope or on the floor depends on information you do not have. If he is a trained circus artist, like the great Charles Blondin here, he might well be capable of walking to and fro along the tightrope for days. If not, he would probably only manage a few steps before crashing to the ground. Either way it remains unlikely that you catch a glimpse of him in mid-air during his downward transit. Unless, of course, someone is playing a trick on you and has told the guy to jump when he sees the curtain move.

This probably seems to have very little to do with physical cosmology, but now forget about tightropes and think about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to the critical value it would need to have for the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result of this is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0.

At the beginning, at the Big Bang itself, all the Friedmann models have Ω arbitrarily close to unity at arbitrarily early times: the limit of Ω as t tends to zero is 1.

If the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops and recollapse begins. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly: one part in 10⁶⁰ will do – the Universe expands forever and Ω evolves towards a value very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor ten either way – the Universe has to be finely tuned.

A slightly different way of describing this is to think instead about the radius of curvature of the Universe. In general relativity the curvature of space is determined by the energy (and momentum) density. If the Universe has zero total energy it is flat: it has no spatial curvature at all, so its curvature radius is infinite. If it has negative total energy (Ω>1) the curvature is positive and the curvature radius finite, in much the same way that a sphere has positive curvature. In the opposite case, with positive total energy, the curvature is negative, like a saddle. I’ve blogged about this before.
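
Quantitatively (a standard result for Friedmann models), the Friedmann constraint can be rearranged to read

\Omega - 1 = \frac{kc^{2}}{a^{2}H^{2}},

so the present radius of curvature is of order (c/H_{0})/\sqrt{|\Omega_{0}-1|}: the closer Ω0 is to unity, the larger the curvature radius, and in the flat case Ω0=1 it is infinite.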

I hope you can now see how this relates to the curious case of the tightrope walker.

If the case Ω0 = 1 applied to our Universe then we could conclude that something trained it to have a fine sense of equilibrium. Without knowing anything about what happened at the initial singularity we might therefore be pre-disposed to assign some degree of probability that this is the case, just as we might be prepared to imagine that our room contained a skilled practitioner of the art of one-dimensional high-level perambulation.

On the other hand, we might equally suspect that the Universe started off slightly over-dense or slightly under-dense, at which point it should either have re-collapsed by now or have expanded so quickly as to be virtually empty.

About fifteen years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is an improper prior.

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1.  Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, but not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to have to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe is in between  two states in which we might have expected it to lie.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate. This drastically changes the arguments I gave above. Without inflation the case with Ω=1 is unstable: a slight perturbation to the Universe sends it diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case behaves very differently: not only is it stable, it becomes an attractor to which all possible universes converge. Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity. Inflation trains our Universe to walk the tightrope.
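
A quick way to see the attractor behaviour is to integrate the single-fluid evolution equation dΩ/dN = (3γ−2)Ω(Ω−1), where N is the number of e-folds and the equation of state is parametrised as p = (γ−1)ρc²; vacuum energy corresponds to γ=0. This is a toy sketch of my own (crude forward-Euler steps, not a serious integration), but it shows every starting value being funnelled towards Ω=1:

# dOmega/dN = (3*gamma - 2) * Omega * (Omega - 1), with N = ln(scale factor).
# For gamma < 2/3 (e.g. vacuum energy, gamma = 0) Omega = 1 is an attractor.

def evolve(omega, gamma, n_efolds=20.0, steps=20_000):
    dN = n_efolds / steps
    for _ in range(steps):
        omega += (3.0 * gamma - 2.0) * omega * (omega - 1.0) * dN
    return omega

for omega_init in (0.01, 0.5, 1.5, 5.0):
    final = evolve(omega_init, gamma=0.0)
    print(f"Omega = {omega_init:>4} -> {final:.6f} after 20 e-folds of vacuum domination")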

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the radius of curvature becomes enormous and the curvature itself infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the Universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is therefore rational to say that inflation is more likely to have happened than not.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.