The Cosmic Tightrope

Here’s a thought experiment for you.

Imagine you are standing outside a sealed room. The contents of the room are hidden from you, except for a small window covered by a curtain. You are told that you can open the curtain once and only briefly to take a peep at what is inside, and you may do this whenever you feel the urge.

You are told what is in the room. It is bare except for a tightrope suspended across it about two metres in the air. Inside the room is a man who at some time in the past – you’re not told when – began walking along the tightrope. His instructions were to carry on walking backwards and forwards along the tightrope until he falls off, either through fatigue or lack of balance. Once he falls he must lie motionless on the floor.

You are not told whether he is skilled in tightrope-walking or not, so you have no way of telling whether he can stay on the rope for a long time or a short time. Neither are you told when he started his stint as a stuntman.

What do you expect to see when you eventually pull the curtain?

Well, if the man does fall off at some point it will clearly take him a very short time to drop to the floor, and once there he has to stay there. One outcome therefore appears very unlikely: that at the instant you open the curtain you see him in mid-air, between a rope and a hard place.

Whether you expect him to be on the rope or on the floor depends on information you do not have. If he is a trained circus artist, like the great Charles Blondin, he might well be capable of walking to and fro along the tightrope for days. If not, he would probably manage only a few steps before crashing to the ground. Either way, it remains unlikely that you would catch a glimpse of him in mid-air during his downward transit. Unless, of course, someone is playing a trick on you and has told the guy to jump the moment he sees the curtain move.

This probably seems to have very little to do with physical cosmology, but now forget about tightropes and think about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to the critical value it would need to have for the expansion eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.
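
For readers who like to see the equation behind the words, here is a standard way of writing all this (with the cosmological constant set to zero, as above) using the Friedmann equation:

H^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}, \qquad \Omega \equiv \frac{8\pi G \rho}{3H^2}, \qquad \Omega - 1 = \frac{kc^2}{a^2 H^2},

where a is the scale factor, H the Hubble parameter, ρ the matter density and k the curvature constant. The sign of Ω-1 is just the sign of k, and is opposite to the sign of the total energy described above.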

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value it takes now, at cosmic time t0.

All the Friedmann models begin, at the Big Bang itself, with Ω arbitrarily close to unity at arbitrarily early times: the limit of Ω as t tends to zero is 1.
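
This limiting behaviour follows directly from the Friedmann equation quoted above: for the pressureless (matter-only) models considered here the deviation from unity scales with the scale factor,

\Omega^{-1}(a) - 1 = \left(\Omega_0^{-1} - 1\right)\frac{a}{a_0},

so the deviation shrinks to zero as a tends to zero and is amplified as the Universe expands.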

If the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops and recollapse begins. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly, one part in 10^60 will do – it evolves to a value very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The Universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor of ten either way – the Universe has to be finely tuned.
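
Here is a minimal numerical sketch of that sensitivity, using the matter-only relation quoted above. The starting epoch a_i = 1e-30, the offsets, and the function name are illustrative assumptions, not measured quantities or anything taken from the original argument:

# Sketch of the cosmic tightrope: evolve a small initial deviation of Omega from 1
# using the matter-only relation 1/Omega - 1 = (1/Omega_i - 1) * (a / a_i).
# The starting epoch a_i and the offsets eps_i below are illustrative assumptions.

def omega_now(eps_i, a_i=1e-30, a_now=1.0):
    """Omega today, given eps_i = Omega_i - 1 at scale factor a_i (matter only, Lambda = 0)."""
    x = (-eps_i / (1.0 + eps_i)) * (a_now / a_i)   # equals (1/Omega_i - 1) * a_now / a_i
    if x <= -1.0:
        return None  # expansion halts (Omega -> infinity) before the Universe reaches a_now
    return 1.0 / (1.0 + x)

for eps_i in (0.0, 1e-31, -1e-31, -1e-25, 1e-25):
    print(f"Omega_i - 1 = {eps_i:+.0e}  ->  Omega now = {omega_now(eps_i)}")

# A zero offset stays on the tightrope forever; offsets of about 1e-31 at this
# (illustrative) starting epoch still give a Universe within roughly 10% of critical
# today, while offsets of about 1e-25 leave it essentially empty, or halt the
# expansion before today's size is ever reached (None).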

A slightly different way of describing this is to think instead about the radius of curvature of the Universe. In general relativity the curvature of space is determined by the energy (and momentum) density. If the Universe has zero total energy it is flat, so it has no curvature at all and its curvature radius is infinite. If it has negative total energy (Ω>1) the curvature is positive and the curvature radius finite, in much the same way that a sphere has positive curvature. In the opposite case, with positive total energy (Ω<1), the curvature is negative, like a saddle. I’ve blogged about this before.
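
In the same notation, the present-day radius of curvature can be written

R_{\rm curv} = \frac{c}{H_0 \sqrt{|\Omega_0 - 1|}},

where H0 is the present value of the Hubble parameter; this radius is infinite for Ω0 = 1 and shrinks as Ω0 moves away from unity in either direction.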

I hope you can now see how this relates to the curious case of the tightrope walker.

If the case Ω0 = 1 applied to our Universe then we could conclude that something trained it to have a fine sense of equilibrium. Without knowing anything about what happened at the initial singularity, we might therefore be predisposed to assign some degree of probability to this being the case, just as we might be prepared to imagine that our room contained a skilled practitioner of the art of one-dimensional high-level perambulation.

On the other hand, we might equally suspect that the Universe started off slightly over-dense or slightly under-dense, in which case it should either have re-collapsed by now or have expanded so quickly as to be virtually empty.

About fifteen years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}|\Omega-1|^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is an improper prior.
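
A quick check of the impropriety: near Ω = 1 the prior behaves like |Ω-1|^{-1}, so its integral over any interval containing Ω = 1,

\int \frac{{\rm d}\Omega}{\Omega|\Omega-1|} \sim \int \frac{{\rm d}\Omega}{|\Omega-1|},

diverges logarithmically, and the same thing happens at Ω = 0; the prior therefore cannot be normalised to unit total probability.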

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.
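
In symbols, this is just Bayes’ theorem applied to Ω: for data D,

P(\Omega|D) \propto P(D|\Omega)\, P(\Omega),

and only when the likelihood P(D|Ω) is sharply peaked does the choice of prior P(Ω) cease to matter much.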

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, this does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1. Until the late 1990s there was indeed a strong ideological preference for models with Ω=1 exactly, not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case in which our Universe sits between the two states where we might have expected it to lie.
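
To give these percentages an absolute scale, here is a back-of-the-envelope calculation of the critical density. The Hubble constant used (70 km/s/Mpc) is an illustrative assumption rather than a number taken from the post:

import math

# Back-of-the-envelope critical density, rho_crit = 3 H0^2 / (8 pi G).
# The Hubble constant H0 = 70 km/s/Mpc is an assumed, illustrative value.
G = 6.674e-11            # gravitational constant in m^3 kg^-1 s^-2
Mpc = 3.086e22           # one megaparsec in metres
H0 = 70.0e3 / Mpc        # Hubble constant converted to s^-1

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)  # about 9e-27 kg per cubic metre
rho_matter = 0.26 * rho_crit                  # the post's Omega of roughly 0.26

print(f"critical density ~ {rho_crit:.1e} kg/m^3")
print(f"matter density   ~ {rho_matter:.1e} kg/m^3")
# The critical density is about five hydrogen-atom masses per cubic metre;
# the matter density is roughly a quarter of that.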

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate, and it drastically changes the arguments I gave above. Without inflation the case with Ω=1 is unstable: a slight perturbation sends the Universe diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case behaves very differently. Not only is it stable, it becomes an attractor towards which all possible universes converge. Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity. Inflation trains our Universe to walk the tightrope.
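
To see why, in the same notation as before: during inflation H is roughly constant while a grows exponentially, so

|\Omega - 1| = \frac{c^2|k|}{a^2 H^2} \propto e^{-2Ht},

and any initial deviation from unity is driven exponentially towards zero. That is the sense in which Ω = 1 becomes an attractor.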

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the curvature becomes infinitesimally small. If there were only “ordinary” matter in the Universe then this would require the Universe to have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material, then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.
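
Schematically, writing the dark-energy contribution as Ω_Λ, zero curvature requires only that the contributions sum to the critical value,

\Omega_{\rm m} + \Omega_\Lambda \simeq 0.26 + 0.74 \simeq 1,

so a low matter density is perfectly compatible with a spatially flat Universe; that is the neat resolution just mentioned.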

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data consistent with observations is larger in models with inflation than in those without it. It is therefore rational to say that inflation is more likely to have happened than not.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.


24 Responses to “The Cosmic Tightrope”

  1. Anton Garrett Says:

    Was your prior in fact 1/{Ω|Ω-1|} so as not to go negative when Ω exceeds 1?

    I find inflation to be pragmatic, the very opposite of beautiful. At risk of repeating what I’ve said before, I’ll bet that Ω=1 someday falls out of a more fundamental theory as naturally as spin-half particles fall out of the Dirac equation. No dark energy epicycles.

    Anton

  2. telescoper Says:

    Good point. I’ve fixed it now. That was a result of my impatience with the limited ability the wordpress stuff has to deal with symbols. I think I’ve done it better now.

    I think “neat” would have been more apt than beautiful, and I agree that there may be a more fundamental theory that requires large-scale flatness. It does seem odd that our universe is flat when it is supposed to be described by a theory that generally requires space-time to be curved.

  3. Let’s restrict ourselves to the Omega>1 case. The proper comparison is not with a tightrope walker who falls off the rope and lands on the floor, but with one who falls off the rope, falls through the floor, through the Earth, retreats to an infinite distance and then returns to the tightrope. What’s more, this happens within a finite time. (In other words, look at how Omega changes with time, with Omega=1 being balanced on the tightrope and the distance below it proportional to the amount by which Omega exceeds 1.) Since we have a quantity which goes from 1 to infinity and back in a finite time, it is clear that not all values can be equally probable. In fact, it’s not that improbable that he is somewhere near the rope. Yes, there is a huge amount of space out to infinity, but all this is covered within a finite time, and more than one might think is spent not that far away from the tightrope. Thus, if we open the curtain, it is more likely than it seems at first glance that the tightrope walker is in the air. Thus, it seems to me that the “any value for Omega other than 1 is so extremely unlikely that there must be some mechanism forcing Omega to be (almost) 1” argument doesn’t have the force most people think it does.

    On a related note, a point well made in Coles and Ellis is that all non-empty Friedmann models (including those with a cosmological constant) start out at the Einstein-de Sitter model (the only exceptions being the static Einstein universe and bounce models which contract from infinity to a finite radius before expanding again, but these are all ruled out by observations). In other words, WHATEVER the value of Omega today, if I go back far enough in time then it becomes arbitrarily close to 1. So the flatness “problem” always exists. The only difference is how far back in time I have to go. But, again as pointed out in Coles and Ellis, this should be seen more as “initial conditions” than as “fine tuning”.

  4. telescoper Says:

    I’m not sure whether you are agreeing or disagreeing with what I wrote. The measure in Omega certainly isn’t uniform in the region exceeding unity, but I never said it was.

    Perhaps an even simpler way to put this is just to choose the only physical length scale in the problem (the curvature radius) to have a logarithmic prior (i.e. to be invariant under multiplicative rescaling). Plugging this into the Friedmann equations gives the same answer for Omega as I mentioned above.

  5. I’m not sure either whether I am agreeing or disagreeing. If you’re confused as well, then I must be on the right track. 🙂

    Another way of expressing what I mean is this: The standard argument for the flatness problem is “Omega could be anything; why should it be so close to 1?” If we assume that we occupy a “random” point in time, as opposed to a “random” value of Omega, then the answer is different. (Of course, in some sense this goes back to the Gott argument you blogged about recently.)

  6. telescoper Says:

    OK, let me put it in a way that brings time into consideration. There are two scales: the curvature radius, which is fixed by the initial conditions and doesn’t change with time, and the horizon scale, which is essentially ct. We can only detect the curvature when it is not very large compared to the horizon.

    What I’m saying is that a value of Omega of 0.2, say, requires the curvature scale to be comparable to the horizon scale when it is observed. Too early and the Universe looks flat, too late and it has either recollapsed or gone into free expansion depending on the sign of the curvature.

    I contend that values like this are very improbable on minimal information grounds: what tuned the universe to have these two scales comparable to each other?

    I think the cases where they are very different are obviously more likely: if the curvature radius is much larger than the horizon then the Universe looks flat. If it is much smaller, then Omega is either vanishingly small or infinitely large.

  7. I don’t follow you when you say that the curvature radius doesn’t change with time. In general, it does: most universes start at Einstein-de Sitter, which is flat, then evolve away from (and later back to, if they recollapse) this point. In general, during this evolution there is a finite radius of curvature. I’m sure this is just a misunderstanding.

    (Of course, the trajectory in the Omega-lambda plane (determined by the values of these parameters at any time, since the evolutionary paths don’t cross) is determined by the initial conditions, and thus the radius of curvature as a function of time is determined by the initial conditions.)

    With regard to time: If we look at the value(s) of Omega (and lambda) as they change with time, then a value of, say, 0.2 becomes more likely if the universe spends more time in this region of parameter space than, say, in the region where Omega=1000000000000. In other words, there are many Omega values between, say, 1000 and 100000000000000, but the universe doesn’t spend that much time in this part of parameter space. If Omega is the “random” variable, then we would not expect it to have any “special” value. If, however, time of observation is the “random” variable, then our expectation for Omega is different.

  8. telescoper Says:

    I forgot to use the word “comoving” in my comment. The physical curvature radius just changes with the scale factor so is fixed in comoving coordinates, whereas the horizon grows in comoving coordinates.

    Converting to time inevitably brings Dicke’s anthropic argument into play. We know that life needs about 10^10 years to get going, as we need stars to make heavy elements. This means that the Universe can’t have gone into free expansion or have recollapsed before that sort of timescale, so we can exclude some of the prior space using the observation that life exists.

  9. OK, insert “comoving” and it is clear.

    The question is whether one can exclude some more of the prior space than that which is excluded using the weak anthropic principle as in Dicke’s argument. In other words, all Omega values are not a priori equally likely, in that those values should be weighted down if they occur in the part of parameter space where the universe spends only a short period of its existence.

  10. Thomas D Says:

    Has anyone seriously tried to find a sensible measure on ‘initial conditions’ or ‘sets of initial data’ in the sense you use it here?

  11. Christopher Hagedorn Says:

    The most beautiful part of this is that if you look at the function: f(x)=(1/(x-1))/x, then f(phi) = 1. So there you have it, the golden ratio and the universal density constant are related.

  12. I have written up my thoughts on the flatness problem in a paper which has been accepted by Monthly Notices of the Royal Astronomical Society.

    Enjoy.

      • I have looked at the paper but the main line of reasoning is so far not clear… can you summarize the logical structure of the argument?

      • The new stuff concerns cosmological models which will collapse in the future. While it is true that (the absolute values of) the cosmological parameters lambda and Omega evolve to infinity (and back to (0,1)—the values in the Einstein-de Sitter model, which is where all non-empty big-bang models start in the lambda-Omega parameter space), for most of the lifetime of the universe the values are not particularly large. Thus, we shouldn’t be puzzled if we don’t observe large values. In other words, the tightrope-walker argument is qualitatively but not quantitatively correct.

        I recommend reading the cited works as well, not just because the paper is easier to understand if one is familiar with the cited works, but because the cited works themselves are well worth reading.

      • Blowing my own horn here to some extent (not ideal, but perhaps better than not having it blown at all), but I just came across an interesting paper which provides an additional, easy-to-understand argument against the existence of the flatness problem, in addition to mine and to those of Coles, Ellis, Evrard, and Lake which I cited in my flatness-problem paper:

        Thus the deviation during nucleosynthesis was only about 10^-17, an impressively small number. It follows merely from the present density deviation, which is not necessarily very small, plus the cosmological equations and the definition of the critical density.

        A useful analogy from elementary physics might be the following: consider a test particle of mass m with total energy E falling into the Newtonian gravitational field of a mass M. The ratio of this particle’s kinetic energy K = mv^2/2 to its potential energy |U| = GMm/r is K/|U| = (E/GMm)r + 1. Note that the difference K/|U| − 1 becomes arbitrarily small as one approaches r → 0, in exactly the same way that Ω − 1 does in cosmology as t → 0. Yet one would hardly be justified in concluding from this that E “must be” zero on the grounds of naturalness.

        In summary, the extremely small deviation of the density ratio from unity in the early Universe is a consequence of the definition of the critical density and the basic equations of relativistic cosmology for any value of k. We therefore do not agree with the viewpoint that k = 0 is necessarily the most natural interpretation of current observational data. If future experiments produce a much smaller limit on the flatness parameter ε (say, 10^-5), then that might be a more convincing indication that the most natural value for k is zero.

        Couldn’t have said it better myself.

        Let me point out two things. First, in both the cosmological case and in the example from Adler and Overduin quoted above, the “fine-tuning” exists whatever the value of Omega today, or whatever the initial velocity of the test particle. So, the fact that Omega is “still” not far from 1 today is a red herring. Second, thanks to Newtonian cosmology, the analogy between the elementary-physics example above and the cosmological case is much closer than many might think.

        I came across this paper because it cited one of the papers I cited in my flatness-problem paper, namely that of Lake (well worth checking out!).

        I recently read some lecture notes from someone who had posted a comment on a blog somewhere. The flatness-problem canard continues to be raised in almost all cosmology lectures. Forget about me; do people not even read what people like Overduin, Lake, and Ellis write (not to mention Coles)? I think most people don’t really think about it, but just quote it because they read it somewhere. Or because they are afraid of Rocky Kolb. 🙂 Interestingly, the original Peebles-and-Dicke claim never appeared in a refereed journal, but only in a volume of conference proceedings.

        As far as I know, no-one has refuted any of the anti-flatness-problem arguments mentioned in the papers by the authors above. Usually, when a wrong claim appears on arXiv, be it by some relative unknown who claims that the universe is screwy, or be it by the venerable Penrose talking about circles on the CMB, or for that matter Kellermann in Nature claiming that the standard-rod test supports the Einstein-de Sitter universe, it is refuted quickly and by several people independently.

        Finally, let me say that I don’t claim that no non-classical explanation is needed if the universe proves to be flat to one part in a million, say, rather than to within a per cent or so (current observational situation). Rather, my point is that the standard argument that classical cosmology leads to absurd conclusions which are manifested in the flatness problem (which was formulated at a time when, observationally, it was not at all clear that the universe is very close to flat, and also before inflation predicted a universe very close to flat) is due to a misinterpretation.

  13. Phillip Helbig Says:

    My mission* is to convince people that the tightrope-walker analogy is at best misleading in reference to the flatness problem.

    Here is an analogy which I hope is not misleading: Suppose someone actually got a rubber sheet, put weights on it, and from observations derived laws of motion. Note that “there does not exist a two-dimensional, cylindrically-symmetric surface that will yield rolling marble orbits that are equivalent to the particle orbits of Newtonian gravitation, or for particle orbits that arise in general relativity” (yes, someone actually did: http://arxiv.org/abs/1312.3893). Clearly, claiming that these observations say something about the real laws of motion would be false, a classic case of arguing too far from analogy.

    Actually, I think the tightrope-walker analogy is even worse because it is not only quantitatively false (the equation of motion of a falling tightrope walker, or almost-balanced pencil, is not analogous to the Friedmann equation), but also qualitatively false.

    ___________________________
    *Well, one of them.

  14. […] strike me that there’s a similarity with an interesting issue in cosmology that I’ve blogged about before (in different […]

  15. […] This paper makes a point that I have wondered about on a number of occasions. One of the problems, in my opinion, is that astrophysicists don’t think enough about their choice of prior. An improper prior is basically a statement of ignorance about the result one expects in advance of incoming data. However, very often we know more than we think we do. I’ve lost track of the number of papers I’ve seen in which the authors blithely assume a flat prior when that makes no sense whatsoever on the basis of what information is available and, indeed, on the structure of the model within which the data are to be interpreted. I discuss a simple example here. […]

  16. […] A comment elsewhere on this blog drew my attention to a paper on the arXiv by Marc Holman with the following abstract: […]
