Archive for Robertson-Walker metric

Faster Than The Speed of Light?

Posted in The Universe and Stuff on January 5, 2015 by telescoper

Back to the office after starting out early to make the long journey back to Brighton from Cardiff, all of which went smoothly for a change. I’ve managed to clear some of the jobs waiting for me on my return from the Christmas holidays so thought I’d take my lunch break and write a quick blog post. I hasten to add, however, that the title isn’t connected in any way with the speed of this morning’s train, which never at any point threatened causality.

What spurred me on to write this piece was an exchange on Twitter, featuring the inestimable Sean Carroll, who delights in getting people to suggest physics for him to explain in fewer than three tweets. It’s a tough job sometimes, but he usually does it brilliantly. Anyway, the third of his tweets, about the size of the observable universe, and my rather pedantic reply to it, both posted on New Year’s Day, were as follows:

I thought I’d take the opportunity to explain in a little bit more detail how and why it can be that the size of the observable universe is significantly larger than one might naively imagine, i.e. (the speed of light) × (time elapsed since the Big Bang), or ct for short. I’ve been asked about this before but never really had the time to respond.

Let’s start with some basic cosmological concepts which, though very familiar, lead to some quite surprising conclusions. First of all, consider the Hubble law, which I will write in the form

v=HR

It’s not sufficiently widely appreciated that for a suitable definition of the recession velocity v and distance R, this expression is exact for any velocity, even one much greater than the speed of light! This doesn’t violate any principle of relativity as long as one is careful with the definition.

Let’s start with time. The assumption of the Cosmological Principle, that the Universe is homogeneous and isotropic on large scales, furnishes a preferred time coordinate, usually called cosmological proper time, or cosmic time, defined in such a way that observers in different locations can set their clocks according to the local density of matter. This allows us to slice the four-dimensional space-time of the Universe into three spatial dimensions and one dimension of time in a particularly elegant way.

The geometry of space-time can now be expressed in terms of the Robertson-Walker metric. To avoid unnecessary complications, and because it seems to be how our Universe is, as far as we can tell, I’ll restrict myself to the case where the spatial sections are flat (ie they have Euclidean geometry). In this case the metric is:

ds^{2}=c^{2}dt^{2} - a^{2}(t) \left[ d{r}^2 + r^{2}d\Omega^{2} \right]

where s is the four-dimensional interval, t is cosmological proper time as defined above, r is a radial coordinate and \Omega defines angular position (the observer is assumed to be at the origin). The function a(t) is called the cosmic scale factor, and it describes the time-evolution of the spatial part of the metric; the coordinate r of an object moving with the cosmic expansion does not change with time, but the proper distance of such an object evolves according to

R=a(t)r

The name “proper” here relates to the fact that this definition of distance corresponds to an interval defined instantaneously (ie one with dt=0). We can’t actually measure such intervals; the best we can do is measure things using signals of some sort, but the notion is very useful in keeping the equations simple and it is perfectly well-defined as long as you stay aware of what it does and does not mean. The other thing we need to know is that the Big Bang is supposed to have happened at t=0, at which point a(t)=0 too.


We can now define the proper velocity of an object comoving with the expansion of the Universe to be

v=\frac{dR}{dt}=\left(\frac{da}{dt} \right)r = \left(\frac{\dot{a}}{a}\right) R = HR

This is the form of the Hubble law that applies for any velocity and any distance. That does not mean, however, that one can work out the redshift of a source by plugging this velocity into the usual Doppler formula, for reasons that I hope will become obvious.

Returning to the metric, the specific case ds=0 is what we need here, as that describes the path of a light ray (a null geodesic); if we only follow light rays travelling radially towards or away from the origin, the former being of greatest relevance to observational cosmology, then we can set d\Omega=0 too and find:

dr =\frac{cdt}{a(t)}

Now to the nub of it. How do we define the size of the observable universe? The best way to answer this is in terms of the particle horizon which, in a nutshell, is defined so that a particle on the particle horizon at the present cosmic time is the most distant object that an observer at the origin can ever have received a light signal from in the entire history of the Universe. The horizon in Robertson-Walker geometry will be a sphere, centred on the origin, with some coordinate radius. The radius of this horizon will increase in time, in a manner that can be calculated by integrating the previous expression from t=0 to t=t_0, the current age of the Universe:

r_p(t_0)=\int_{0}^{t_0} \frac{cdt}{a(t)}.

For any old cosmological model this has to be integrated by solving for the denominator as a function of time using the Friedmann equations, usually numerically. However, there is a special case we can do trivially which demonstrates all the salient points. The matter-dominated Einstein-de Sitter model is flat and has the solution

a(t)\propto t^{2/3}

so that

\frac{a(t)}{a(t_0)} = \left(\frac{t}{t_0}\right)^{2/3}

Plugging this into the integral and using the above definitions we find that in this model the present proper distance of an object on our particle horizon is

R_p = 3ct_{0}
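
For anyone who wants to see the intermediate step, the integral is elementary: writing a(t)=a(t_0)(t/t_0)^{2/3} gives

r_p(t_0)=\int_{0}^{t_0}\frac{c\,dt}{a(t_0)}\left(\frac{t_0}{t}\right)^{2/3}=\frac{c\,t_0^{2/3}}{a(t_0)}\left[3t^{1/3}\right]_{0}^{t_0}=\frac{3ct_0}{a(t_0)},

and multiplying this comoving radius by a(t_0) gives the proper distance quoted above.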


By the way, some cosmologists prefer to use a different definition of the horizon, called the Hubble sphere. This is the sphere on which objects are moving away from the observer according to the Hubble law at exactly the velocity of light. For the Einstein-de Sitter cosmology the Hubble parameter is easily found:

H(t)=\frac{2}{3t} \rightarrow R_{c}= \frac{3}{2} ct_{0}.

Notice that velocities in this model are always decaying, so the expansion is decelerating rather than accelerating, hence my comment on Twitter above. The apparent paradox therefore has nothing to do with acceleration, although the particle horizon does get a bit bigger in models with, e.g., a cosmological constant in which the expansion accelerates at late times. In the current standard cosmological model the radius of the particle horizon is about 46 billion light years for an age of 13.7 billion years, which is only about 10% larger than in the Einstein-de Sitter case.
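
If you want to check the numbers for yourself, here is a minimal numerical sketch in Python (using numpy and scipy) that does the horizon integral for a flat matter-plus-Λ model. The parameter values Ωm = 0.3, ΩΛ = 0.7, h = 0.7 are round-number assumptions of mine rather than the precise standard-model values, and radiation is neglected, so expect answers in the right ballpark rather than to high precision.

import numpy as np
from scipy.integrate import quad
# Round-number parameters, assumed for illustration: flat matter + Lambda, radiation neglected
Om, OL, h = 0.3, 0.7, 0.7
H0 = 100.0 * h                     # Hubble constant in km/s/Mpc
c = 299792.458                     # speed of light in km/s
DH = c / H0                        # Hubble distance c/H0 in Mpc
def E(a):
    # dimensionless Hubble rate H(a)/H0 for this model
    return np.sqrt(Om / a**3 + OL)
# age of the Universe: t0 = (1/H0) * integral of da/(a E(a)) from a=0 to a=1
age, _ = quad(lambda a: 1.0 / (a * E(a)), 0.0, 1.0)
# comoving particle horizon: r_p = (c/H0) * integral of da/(a^2 E(a)) from a=0 to a=1
chi, _ = quad(lambda a: 1.0 / (a**2 * E(a)), 0.0, 1.0)
t0_Gyr = age * 977.8 / H0          # 1/(1 km/s/Mpc) = 977.8 Gyr
Rp_Gly = chi * DH * 3.2616e-3      # 1 Mpc = 3.2616e-3 billion light years
print("age t0           ~ %.1f Gyr" % t0_Gyr)     # about 13.5 Gyr for these parameters
print("particle horizon ~ %.1f Gly" % Rp_Gly)     # about 46 Gly
print("EdS at same age  ~ %.1f Gly" % (3 * t0_Gyr))   # R_p = 3*c*t0 for Einstein-de Sitter

For these illustrative numbers the script returns an age of about 13.5 billion years and a particle horizon of roughly 46 billion light years, reproducing the figure quoted above give or take the choice of parameters.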

There is no real contradiction with relativity here because the structure of the metric encodes all the requirements of causality. It is true that there are objects moving away from the origin at proper velocities faster than that of light, but we can’t make instantaneous measurements of cosmological distances; what we observe is their redshifted light. In other words, we can’t make measurements of intervals with dt=0; we have to use light rays, which follow paths with ds=0, i.e. we have to make observations down our past light cone. Nevertheless, there are superluminal velocities, in the sense I have defined them above, in standard cosmological models. Indeed, these velocities all diverge at t=0. Blame it all on the singularity!

This figure made by Mark Whittle (University of Virginia) shows our past light cone in the present standard cosmological model:

[Figure: our past light cone in the current standard cosmological model (Mark Whittle, University of Virginia)]

If you were expecting the past light cone to look triangular in cross-section then you’re probably thinking of Minkowski space, or a representation involving coordinates chosen to resemble Minkowski space. If you look at the left hand side of the figure, you will find the world lines of particles moving with the cosmic expansion, labelled by their present proper distance, which is obtained by extrapolating the dotted lines until they intersect a line parallel to the x-axis running through “Here & Now”. Where we actually see these objects is not at their present proper distance but at the point in space-time where their world line intersects the past light cone. You will see that an object on the particle horizon intersected our past light cone right at the bottom of the figure.

So why does the light cone look so peculiar? Well, I think the simplest way to explain it is to say that while the spatial sections in this model are flat (Euclidean) the four-dimensional geometry is most definitely curved. You can think of the bending of light rays shown in the figure as a kind of gravitational lensing effect due to all the matter in the Universe. I’d say that the fact that the particle horizon has a radius larger than ct is not because of acceleration but because of the curvature of space-time, an assertion consistent with the fact that the only familiar world model in which this effect does not occur is the (empty) purely kinematic Milne cosmology, which is based entirely on special relativity.


The Importance of Being Homogeneous

Posted in The Universe and Stuff on August 29, 2012 by telescoper

A recent article in New Scientist reminded me that I never completed the story I started with a couple of earlier posts (here and there), so while I wait for the rain to stop I thought I’d make myself useful by posting something now. It’s all about a paper available on the arXiv by Scrimgeour et al. concerning the transition to homogeneity of galaxy clustering in the WiggleZ galaxy survey, the abstract of which reads:

We have made the largest-volume measurement to date of the transition to large-scale homogeneity in the distribution of galaxies. We use the WiggleZ survey, a spectroscopic survey of over 200,000 blue galaxies in a cosmic volume of ~1 (Gpc/h)^3. A new method of defining the ‘homogeneity scale’ is presented, which is more robust than methods previously used in the literature, and which can be easily compared between different surveys. Due to the large cosmic depth of WiggleZ (up to z=1) we are able to make the first measurement of the transition to homogeneity over a range of cosmic epochs. The mean number of galaxies N(<r) in spheres of comoving radius r is proportional to r^3 within 1%, or equivalently the fractal dimension of the sample is within 1% of D_2=3, at radii larger than 71 \pm 8 Mpc/h at z~0.2, 70 \pm 5 Mpc/h at z~0.4, 81 \pm 5 Mpc/h at z~0.6, and 75 \pm 4 Mpc/h at z~0.8. We demonstrate the robustness of our results against selection function effects, using a LCDM N-body simulation and a suite of inhomogeneous fractal distributions. The results are in excellent agreement with both the LCDM N-body simulation and an analytical LCDM prediction. We can exclude a fractal distribution with fractal dimension below D_2=2.97 on scales from ~80 Mpc/h up to the largest scales probed by our measurement, ~300 Mpc/h, at 99.99% confidence.

To paraphrase, the conclusion of this study is that while galaxies are strongly clustered on small scales – in a complex `cosmic web’ of clumps, knots, sheets and filaments –  on sufficiently large scales, the Universe appears to be smooth. This is much like a bowl of porridge which contains many lumps, but (usually) none as large as the bowl it’s put in.

Our standard cosmological model is based on the Cosmological Principle, which asserts that the Universe is, in a broad-brush sense, homogeneous (is the same in every place) and isotropic (looks the same in all directions). But the question that has troubled cosmologists for many years is what is meant by large scales? How broad does the broad brush have to be?

I blogged some time ago about the idea that the Universe might have structure on all scales, as would be the case if it were described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a spherical volume of radius R is proportional to R^D. If galaxies are distributed uniformly (homogeneously) then D = 3, as the number of neighbours simply depends on the volume of the sphere, i.e. as R^3, and the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume; galaxies distributed in sheets would have D=2, and so on.
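
To make that definition of D concrete, here is a toy sketch in Python. It is emphatically not the estimator used by Scrimgeour et al., and all the numbers in it are my own illustrative choices: it simply counts the mean number of neighbours within spheres of growing radius around points drawn either homogeneously in a unit box or along a crude "filament", and reads D off the slope of log N(<r) against log r.

import numpy as np
rng = np.random.default_rng(1)
def fractal_dimension(points, radii, n_centres=200):
    # use points well away from the edges of the unit box as sphere centres, to dodge edge effects
    interior = points[np.all((points > 0.15) & (points < 0.85), axis=1)]
    centres = interior[rng.choice(len(interior), n_centres, replace=False)]
    # squared distances from every centre to every point
    d2 = np.sum((points[None, :, :] - centres[:, None, :]) ** 2, axis=-1)
    # mean neighbour count within each radius, excluding the centre itself
    counts = [np.mean(np.sum(d2 < r**2, axis=1) - 1) for r in radii]
    # D is the slope of log N(<r) against log r
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope
radii = np.linspace(0.03, 0.10, 8)
homogeneous = rng.random((10000, 3))     # Poisson points in the unit cube: expect D near 3
t = rng.random(10000)                    # points scattered along the cube diagonal: expect D near 1
filament = np.column_stack([t, t, t]) + 0.005 * rng.standard_normal((10000, 3))
print("homogeneous: D ~ %.2f" % fractal_dimension(homogeneous, radii))
print("filament:    D ~ %.2f" % fractal_dimension(filament, radii))

With these toy inputs the homogeneous set comes out with D very close to 3 and the filamentary one very close to 1, which is the kind of behaviour the WiggleZ analysis is testing for on scales of tens of Mpc/h.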

We know that D \simeq 1.2 on small scales (in cosmological terms, still several Megaparsecs), but the evidence for a turnover to D=3 has not been so strong, at least not until recently. It’s not just that measuring D from a survey is actually rather tricky, but also that when we cosmologists adopt the Cosmological Principle we apply it not to the distribution of galaxies in space, but to space itself. We assume that space is homogeneous so that its geometry can be described by the Friedmann-Lemaitre-Robertson-Walker metric.

According to Einstein’s  theory of general relativity, clumps in the matter distribution would cause distortions in the metric which are roughly related to fluctuations in the Newtonian gravitational potential \delta\Phi by \delta\Phi/c^2 \sim \left(\lambda/ct \right)^{2} \left(\delta \rho/\rho\right), give or take a factor of a few, so that a large fluctuation in the density of matter wouldn’t necessarily cause a large fluctuation of the metric unless it were on a scale \lambda reasonably large relative to the cosmological horizon \sim ct. Galaxies correspond to a large \delta \rho/\rho \sim 10^6 but don’t violate the Cosmological Principle because they are too small in scale \lambda to perturb the background metric significantly.
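
To put some purely illustrative numbers on that, take a galaxy of size \lambda \sim 10 kpc = 10^{-2} Mpc and a horizon scale ct \sim 4\times 10^{3} Mpc; then

\frac{\delta\Phi}{c^{2}} \sim \left(\frac{10^{-2}}{4\times 10^{3}}\right)^{2}\times 10^{6} \sim 6\times 10^{-6},

i.e. a tiny wrinkle in the metric despite the enormous density contrast.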

The discussion of a fractal universe is one I’m overdue to return to. In my previous post I left the story as it stood about 15 years ago, and there have been numerous developments since then, not all of them consistent with each other. I will do a full “Part 2” to that post eventually, but in the meantime I’ll just comment that this particular one does seem to be consistent with a Universe that possesses the property of large-scale homogeneity. If that conclusion survives the next generation of even larger galaxy redshift surveys then it will come as an immense relief to cosmologists.

The reason for that is that the equations of general relativity are very hard to solve in cases where there isn’t a lot of symmetry; there are just too many equations to solve for a general solution to be obtained. If the Cosmological Principle applies, however, the equations simplify enormously (both in number and form) and we can get results we can work with on the back of an envelope. Small fluctuations about the smooth background solution can be handled (approximately but robustly) using a technique called perturbation theory. If the fluctuations are large, however, these methods don’t work. What we need to do instead is construct exact inhomogeneous models, and that is very, very hard. It’s of course a different question as to why the Universe is so smooth on large scales, but as a working cosmologist the real importance of it being that way is that it makes our job so much easier than it would otherwise be.

P.S. And I might add that the importance of the Scrimgeour et al paper to me personally is greatly amplified by the fact that it cites a number of my own articles on this theme!

Cosmic Clumpiness Conundra

Posted in The Universe and Stuff on June 22, 2011 by telescoper

Well there’s a coincidence. I was just thinking of doing a post about cosmological homogeneity, spurred on by a discussion at the workshop I attended in Copenhagen a couple of weeks ago, when suddenly I’m presented with a topical hook to hang it on.

New Scientist has just carried a report about a paper by Shaun Thomas and colleagues from University College London, the abstract of which reads:

We observe a large excess of power in the statistical clustering of luminous red galaxies in the photometric SDSS galaxy sample called MegaZ DR7. This is seen over the lowest multipoles in the angular power spectra Cℓ in four equally spaced redshift bins between 0.4 \leq z \leq 0.65. However, it is most prominent in the highest redshift band at \sim 4\sigma and it emerges at an effective scale k \sim 0.01 h{\rm Mpc}^{-1}. Given that MegaZ DR7 is the largest cosmic volume galaxy survey to date (3.3({\rm Gpc} h^{-1})^3) this implies an anomaly on the largest physical scales probed by galaxies. Alternatively, this signature could be a consequence of it appearing at the most systematically susceptible redshift. There are several explanations for this excess power that range from systematics to new physics. We test the survey, data, and excess power, as well as possible origins.

To paraphrase, it means that the distribution of galaxies in the survey they study is clumpier than expected on very large scales. In fact the level of fluctuation is about a factor two higher than expected on the basis of the standard cosmological model. This shows that either there’s something wrong with the standard cosmological model or there’s something wrong with the survey. Being a skeptic at heart, I’d bet on the latter if I had to put my money somewhere, because this survey involves photometric determinations of redshifts rather than the more accurate and reliable spectroscopic variety. I won’t be getting too excited about this result unless and until it is confirmed with a full spectroscopic survey. But that’s not to say it isn’t an interesting result.

For one thing it keeps alive a debate about whether, and at what scale, the Universe is homogeneous. The standard cosmological model is based on the Cosmological Principle, which asserts that the Universe is, in a broad-brush sense, homogeneous (is the same in every place) and isotropic (looks the same in all directions). But the question that has troubled cosmologists for many years is what is meant by large scales? How broad does the broad brush have to be?

At our meeting a few weeks ago, Subir Sarkar from Oxford pointed out that the evidence for cosmological homogeneity isn’t as compelling as most people assume. I blogged some time ago about an alternative idea, that the Universe might have structure on all scales, as would be the case if it were described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a spherical volume of radius R is proportional to R^D. If galaxies are distributed uniformly (homogeneously) then D = 3, as the number of neighbours simply depends on the volume of the sphere, i.e. as R^3, and the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume; galaxies distributed in sheets would have D=2, and so on.

The discussion of a fractal universe is one I’m overdue to return to. In my previous post  I left the story as it stood about 15 years ago, and there have been numerous developments since then. I will do a “Part 2” to that post before long, but I’m waiting for some results I’ve heard about informally, but which aren’t yet published, before filling in the more recent developments.

We know that D \simeq 1.2 on small scales (in cosmological terms, still several Megaparsecs), but the evidence for a turnover to D=3 is not so strong. The point is, however, at what scale would we say that homogeneity is reached? Not when D=3 exactly, because there will always be statistical fluctuations; see below. What scale, then? Where D=2.9? D=2.99?

What I’m trying to say is that much of the discussion of this issue involves the phrase “scale of homogeneity” when that is a poorly defined concept. There is no such thing as “the scale of homogeneity”, just a whole host of quantities that vary with scale in a way that may or may not approach the value expected in a homogeneous universe.

It’s even more complicated than that, actually. When we cosmologists adopt the Cosmological Principle we apply it not to the distribution of galaxies in space, but to space itself. We assume that space is homogeneous so that its geometry can be described by the Friedmann-Lemaitre-Robertson-Walker metric.

According to Einstein’s  theory of general relativity, clumps in the matter distribution would cause distortions in the metric which are roughly related to fluctuations in the Newtonian gravitational potential \delta\Phi by \delta\Phi/c^2 \sim \left(\lambda/ct \right)^{2} \left(\delta \rho/\rho\right), give or take a factor of a few, so that a large fluctuation in the density of matter wouldn’t necessarily cause a large fluctuation of the metric unless it were on a scale \lambda reasonably large relative to the cosmological horizon \sim ct. Galaxies correspond to a large \delta \rho/\rho \sim 10^6 but don’t violate the Cosmological Principle because they are too small to perturb the background metric significantly. Even the big clumps found by the UCL team only correspond to a small variation in the metric. The issue with these, therefore, is not so much that they threaten the applicability of the Cosmological Principle, but that they seem to suggest structure might have grown in a different way to that usually supposed.

The problem is that we can’t measure the gravitational potential on these scales directly so our tests are indirect. Counting galaxies is relatively crude because we don’t even know how well galaxies trace the underlying mass distribution.

An alternative way of doing this is to use not the positions of galaxies, but their velocities (usually called peculiar motions). These deviations from a pure Hubble flow are caused by lumps of matter pulling on the galaxies: the lumpier the Universe is, the larger the velocities are; and the larger the lumps are, the more coherent the flow becomes. On small scales galaxies whizz around at speeds of hundreds of kilometres per second relative to each other, but averaged over larger and larger volumes the bulk flow should get smaller and smaller, eventually coming to zero in a frame in which the Universe is exactly homogeneous and isotropic.

Roughly speaking the bulk flow v should relate to the metric fluctuation as approximately \delta \Phi/c^2 \sim \left(\lambda/ct \right) \left(v/c\right).
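
To get a rough feel for the numbers (the values here are just illustrative), take \delta\Phi/c^2 \sim 10^{-5}, as quoted below, and a scale \lambda \sim 100 Mpc, so that \lambda/ct \sim 1/40; then

\frac{v}{c} \sim \frac{\delta\Phi/c^{2}}{\lambda/ct} \sim 40\times 10^{-5} = 4\times 10^{-4},

which corresponds to a bulk flow of order 100 km/s on such scales.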

It has been claimed that some observations suggest the existence of a dark flow which, if true, would challenge the reliability of the standard cosmological framework, but these results are controversial and are yet to be independently confirmed.

But suppose you could measure the net flow of matter in spheres of increasing size. At what scale would you claim homogeneity is reached? Not when the flow is exactly zero, as there will always be fluctuations, but exactly how small?

The same goes for all the other possible criteria we have for judging cosmological homogeneity. We are free to choose the point where we say the level of inhomogeneity is sufficiently small to be satisfactory.

In fact, the standard cosmology (or at least the simplest version of it) has the peculiar property that it doesn’t ever reach homogeneity anyway! If the spectrum of primordial perturbations is scale-free, as is usually supposed, then the metric fluctuations don’t vary with scale at all. In fact, they’re fixed at a level of \delta \Phi/c^2 \sim 10^{-5}.
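
The reasoning behind that statement is a one-line exercise with the relation quoted earlier. Roughly speaking, for a power-law spectrum of primordial density fluctuations P(k) \propto k^{n}, the rms density contrast on a comoving scale \lambda \sim 1/k scales as \delta\rho/\rho \propto \lambda^{-(n+3)/2}, so

\frac{\delta\Phi}{c^{2}} \sim \left(\frac{\lambda}{ct}\right)^{2}\frac{\delta\rho}{\rho} \propto \lambda^{2}\,\lambda^{-(n+3)/2} = \lambda^{(1-n)/2},

which is independent of scale precisely for the scale-free (Harrison-Zel’dovich) case n=1.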

The fluctuations are small, so the FLRW metric is pretty accurate, but they don’t get smaller with increasing scale, so there is no scale at which it becomes exactly true. So let’s have no more of “the scale of homogeneity” as if that were a meaningful phrase. Let’s keep the discussion to the behaviour of suitably defined measurable quantities and how they vary with scale. You know, like real scientists do.