Archive for Large-scale Structure

Zel’dovich Pancake Day!

Posted in The Universe and Stuff with tags , , , on February 16, 2021 by telescoper

Today it’s Shrove Tuesday but unfortunately I forgot to buy shroves yesterday so will have to make do with pancakes instead, but not the usual kind. I’ve blogged before about the Zel’dovich Approximation (published in Zeldovich, Ya.B. 1970, A&A, 5, 84) but there’s no harm in describing this classic again. Here’s the first page of the original paper:


In a nutshell, this daringly simple approximation considers the evolution of particles in an expanding Universe from an early near-uniform state into the non-linear regime as a sort of ballistic, or kinematic, process. Imagine the matter particles are initially placed on a uniform grid, where they are labelled by Lagrangian coordinates \vec{q}. Their (Eulerian) positions at some later time t are taken to be

\vec{r}(\vec{q},t) = a(t) \vec{x}(\vec{q},t) = a(t) \left[ \vec{q} + b(t) \vec{s}(\vec{q}) \right].

Here the \vec{x} coordinates are comoving, i.e. scaled with the expansion of the Universe using the scale factor a(t). The displacement \vec{s}(\vec{q}) between initial and final positions in comoving coordinates is taken to have the form

\vec{s}(\vec{q}) = \vec{\nabla} \Phi_0 (\vec{q})

where \Phi_0 is a kind of velocity potential (which, in linear Newtonian theory, is also proportional to the gravitational potential). If we’ve got the theory right then the gravitational potential field defined over the initial positions is a Gaussian random field. The function b(t) is the growing mode of density perturbations in the linear theory of gravitational instability.

This all means that the particles just get a small initial kick from the uniform Lagrangian grid and their subsequent motion carries on in the same direction. The approximation predicts the formation of caustics in the final density field when particles from two or more different initial locations arrive at the same final location, a condition known as shell-crossing. The caustics are identified with the main elements we find in large-scale structure. Because the initial collapse is usually along one direction, the dominant structures are known as pancakes (or, as Zel’dovich himself might have called them, blini…).
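
To make all this concrete, here is a minimal one-dimensional sketch of the approximation in Python. It is my own toy illustration (with an arbitrary choice of potential spectrum and normalisation), not anything from the original paper: it draws a Gaussian random potential on a periodic grid, kicks particles off a uniform grid along the gradient of \Phi_0, and lets the growing mode b(t) carry them forward until caustics form.

```python
import numpy as np

# A minimal 1-D sketch of the Zel'dovich approximation (illustrative only).
# Particles start on a uniform Lagrangian grid q and move ballistically:
#     x(q, t) = q + b(t) * s(q),   with   s(q) = dPhi0/dq.
# A caustic forms wherever dx/dq = 1 + b * ds/dq reaches zero (shell-crossing).

rng = np.random.default_rng(42)
n = 4096                               # number of particles / grid points
L = 1.0                                # periodic box size (comoving units)
q = np.linspace(0.0, L, n, endpoint=False)

# Gaussian random potential Phi0 with a steep (arbitrary) spectrum, so the
# displacement field is smooth on the grid scale.
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)
amplitude = np.zeros_like(k)
amplitude[1:] = k[1:] ** -2.0
phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
phi0 = np.fft.irfft(amplitude * np.exp(1j * phases), n=n)
phi0 *= 0.01 / phi0.std()              # normalise: a small initial kick

s = np.gradient(phi0, q)               # displacement field s(q) = grad Phi0

for b in (0.5, 1.0, 2.0, 4.0):         # the growing mode b(t) plays the role of time
    x = (q + b * s) % L                # ballistic (kinematic) positions
    density, _ = np.histogram(x, bins=256, range=(0.0, L))
    print(f"b = {b:3.1f}: max density contrast = {density.max() / density.mean():6.1f}")
```

In one dimension a caustic appears as soon as 1 + b \, ds/dq first touches zero somewhere on the grid, which is why the density contrast in the sketch blows up as b grows.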

Here’s a picture of a simulation showing these structures from the classic paper of Davis, Efstathiou, Frenk & White (1985):

Despite its simplicity this approximation is known to perform extremely well at reproducing the morphology of the cosmic web, although it breaks down after shell-crossing has occurred. In reality, bound structures are formed whereas the Zel’dovich approximation simply predicts that particles sail straight through the caustic which consequently evaporates.

Early Dark Energy and Cosmic Tension

Posted in The Universe and Stuff with tags , , , , , on March 19, 2020 by telescoper

To avoid talking any more about you-know-what I thought I would continue the ongoing Hubble constant theme. There is an interesting new paper on the arXiv (by Hill et al.) about the extent to which a modified form of dark energy might relieve the current apparent tension.

You can read the abstract on the arXiv, and you can also download the PDF here.

I think the conclusion is clear and it may or may not be related to a previous post of mine here about the implications of Etherington’s theorem.

Here’s my ongoing poll on the Hubble constant. Feel free to while away a few seconds of your time working from home by casting a vote!

The Zel’dovich Lens

Posted in The Universe and Stuff with tags , , , , on June 30, 2014 by telescoper

Back to the grind after an enjoyable week in Estonia I find myself with little time to blog, so here’s a cute graphic by way of  a postscript to the IAU Symposium on The Zel’dovich Universe. I’ve heard many times about this way of visualizing the Zel’dovich Approximation (published in Zeldovich, Ya.B. 1970, A&A, 5, 84) but this is by far the best graphical realization I have seen. Here’s the first page of the original paper:


In a nutshell, this daringly simple approximation considers the evolution of particles in an expanding Universe from an early near-uniform state into the non-linear regime as a sort of ballistic, or kinematic, process. Imagine the matter particles are initially placed on a uniform grid, where they are labelled by Lagrangian coordinates \vec{q}. Their (Eulerian) positions at some later time t are taken to be

\vec{r}(\vec{q},t) = a(t) \vec{x}(\vec{q},t) = a(t) \left[ \vec{q} + b(t) \vec{s}(\vec{q}) \right].

Here the \vec{x} coordinates are comoving, i.e. scaled with the expansion of the Universe using the scale factor a(t). The displacement \vec{s}(\vec{q}) between initial and final positions in comoving coordinates is taken to have the form

\vec{s}(\vec{q}) = \vec{\nabla} \Phi_0 (\vec{q})

where \Phi_0 is a kind of velocity potential (which, in linear Newtonian theory, is also proportional to the gravitational potential). If we’ve got the theory right then the gravitational potential field defined over the initial positions is a Gaussian random field. The function b(t) is the growing mode of density perturbations in the linear theory of gravitational instability.

This all means that the particles just get a small initial kick from the uniform Lagrangian grid and their subsequent motion carries on in the same direction. The approximation predicts the formation of caustics  in the final density field when particles from two or more different initial locations arrive at the same final location, a condition known as shell-crossing. The caustics are identified with the walls and filaments we find in large-scale structure.

Despite its simplicity this approximation is known to perform extremely well at reproducing the morphology of the cosmic web, although it breaks down after shell-crossing has occurred. In reality, bound structures are formed whereas the Zel’dovich approximation simply predicts that particles sail straight through the caustic which consequently evaporates.

Anyway the mapping described above can also be given an interpretation in terms of optics. Imagine a uniform illumination field (the initial particle distribution) incident upon a non-uniform surface (e.g. the surface of the water in a swimming pool). Time evolution is represented by greater depths within the pool.  The light pattern observed on the bottom of the pool (the final distribution) displays caustics with a very similar morphology to the Cosmic Web, except in two dimensions, obviously.

Here is a very short  but very nice video by Johan Hidding showing how this works:

In this context, the Zel’dovich approximation corresponds to the limit of geometrical optics. More accurate approximations can presumably be developed using analogies with physical optics, but this programme has only just begun.

The Power Spectrum and the Cosmic Web

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , , on June 24, 2014 by telescoper

One of the things that makes this conference different from most cosmology meetings is that it is focussing on the large-scale structure of the Universe as a topic in itself, rather than as a source of statistical information about, e.g., cosmological parameters. This means that we’ve been hearing about a set of statistical methods that is somewhat different from those usually used in the field (which are primarily based on second-order quantities).

One of the challenges cosmologists face is how to quantify the patterns we see in galaxy redshift surveys. In the relatively recent past the small size of the available data sets meant that only relatively crude descriptors could be used; anything sophisticated would be rendered useless by noise. For that reason, statistical analysis of galaxy clustering tended to be limited to the measurement of autocorrelation functions, usually constructed in Fourier space in the form of power spectra; you can find a nice review here.

Because it is so robust and contains a great deal of important information, the power spectrum has become ubiquitous in cosmology. But I think it’s important to realise its limitations.

Take a look at these two N-body computer simulations of large-scale structure:

The one on the left is a proper simulation of the “cosmic web” which is at least qualitatively realistic, in that it contains filaments, clusters and voids pretty much like what is observed in galaxy surveys.

To make the picture on the right I first  took the Fourier transform of the original  simulation. This approach follows the best advice I ever got from my thesis supervisor: “if you can’t think of anything else to do, try Fourier-transforming everything.”

Anyway each Fourier mode is complex and can therefore be characterized by an amplitude and a phase (the modulus and argument of the complex quantity). What I did next was to randomly reshuffle all the phases while leaving the amplitudes alone. I then performed the inverse Fourier transform to construct the image shown on the right.

What this procedure does is to produce a new image which has exactly the same power spectrum as the first. You might be surprised by how little the pattern on the right resembles that on the left, given that they share this property; the distribution on the right is much fuzzier. In fact, the sharply delineated features  are produced by mode-mode correlations and are therefore not well described by the power spectrum, which involves only the amplitude of each separate mode. In effect, the power spectrum is insensitive to the part of the Fourier description of the pattern that is responsible for delineating the cosmic web.
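
If you want to try this yourself, here is a minimal sketch of the phase-shuffling experiment in Python. It is a toy stand-in for the real thing: a few thin filaments drawn on a grid play the role of the sharply delineated cosmic web.

```python
import numpy as np

# Minimal phase-shuffling sketch: keep the Fourier amplitudes (and hence the
# power spectrum) of an image, but randomise all the phases.
n = 256
image = np.zeros((n, n))
image[n // 4, :] = 1.0                        # horizontal "filament"
image[:, n // 3] = 1.0                        # vertical "filament"
image[np.arange(n), np.arange(n)] = 1.0       # diagonal "filament"

modes = np.fft.fft2(image)                    # each mode: amplitude and phase
amplitudes = np.abs(modes)

# Borrow the phases of a real white-noise field: they are random but keep the
# Hermitian symmetry, so the inverse transform comes out real.
rng = np.random.default_rng(0)
noise_phases = np.angle(np.fft.fft2(rng.standard_normal((n, n))))
surrogate = np.fft.ifft2(amplitudes * np.exp(1j * noise_phases)).real

# Same power spectrum, but the sharp filaments are gone from the surrogate:
# they lived in the phase correlations, not in the amplitudes.
print("original  peak/rms:", image.max() / image.std())
print("surrogate peak/rms:", surrogate.max() / surrogate.std())
```

Borrowing phases from the transform of a white-noise field, rather than drawing them independently for each mode, is just a convenient way of keeping the shuffled image real.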

If you’re confused by this, consider the Fourier transforms of (a) white noise and (b) a Dirac delta-function. Both produce flat power-spectra, but they look very different in real space because in (b) all the Fourier modes are correlated in such a way that they are in phase at the one location where the pattern is not zero; everywhere else they interfere destructively. In (a) the phases are distributed randomly.
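
In numpy that comparison takes only a few lines (again just a toy check of my own):

```python
import numpy as np

# Toy check: (a) white noise and (b) a Dirac delta both have (statistically)
# flat power spectra, but the delta's Fourier modes are all in phase.
n = 1024
rng = np.random.default_rng(2)

delta = np.zeros(n)
delta[0] = 1.0                          # all modes add coherently at index 0
noise = rng.standard_normal(n)

d_modes = np.fft.fft(delta)
n_modes = np.fft.fft(noise)

print("delta amplitudes :", np.abs(d_modes[:4]))         # exactly 1, 1, 1, 1
print("delta phases     :", np.angle(d_modes[:4]))       # all zero: in phase
print("noise phase std  :", np.angle(n_modes[1:]).std()) # ~pi/sqrt(3): random
```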

The moral of this is that there is much more to the pattern of galaxy clustering than meets the power spectrum…

Illustris, Cosmology, and Simulation…

Posted in The Universe and Stuff with tags , , , , , , on May 8, 2014 by telescoper

There’s been quite a lot of news coverage over the last day or two emanating from a paper just out in the journal Nature by Vogelsberger et al. which describes a set of cosmological simulations called Illustris; see for example here and here.

The excitement revolves around the fact that Illustris represents a bit of a landmark, in that it’s the first hydrodynamical simulation with sufficient dynamical range that it is able to fully resolve the formation and evolution of  individual galaxies within the cosmic web of large-scale structure.

The simulations obviously represent a tremendous piece of work; they were run on supercomputers in France, Germany, and the USA; the largest of them was run on no less than 8,192 computer cores and took 19 million CPU hours. A single state-of-the-art desktop computer would require more than 2000 years to perform this calculation!
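
As a quick back-of-the-envelope check of that claim (my arithmetic, not the paper’s):

1.9 \times 10^{7} \mbox{ CPU hours} \div \left( 24 \times 365 \mbox{ hours per year} \right) \approx 2.2 \times 10^{3} \mbox{ years on a single core}.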

There’s even a video to accompany it (shame about the music):

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:

1.

a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the meaning intended to be conveyed by the usage in the context of cosmological simulations. This is worth bearing in mind if you prefer old-fashioned analytical theory and want to wind up a simulationist! In football, of course, you can even get sent off for simulation…

Reproducing a reasonable likeness of something in a computer is not the same as understanding it, but that is not to say that these simulations aren’t incredibly useful and powerful, not just for making lovely pictures and videos but for helping to plan large-scale survey programmes that can go and map cosmological structures on the same scale. Simulations of this scale are needed to help design observational and data analysis strategies for, e.g., the forthcoming Euclid mission.

Fly through of the GAMA Galaxy Catalogue

Posted in The Universe and Stuff with tags , , , , , , , , on March 13, 2014 by telescoper

When I’m struggling to find time to do a proper blog post I’m always grateful that I work in cosmology, because nearly every day there’s something interesting to post about. I’m indebted to Andy Lawrence for bringing the following wonderful video to my attention. It comes from the Galaxy And Mass Assembly Survey (or GAMA Survey for short), a spectroscopic survey of around 300,000 galaxies in a region of the sky comprising about 300 square degrees; the measured redshifts of the galaxies enable their three-dimensional positions to be plotted. The video shows the shape of the survey volume before showing what the distribution of galaxies in space looks like as you fly through. Note that the galaxy distances are to scale, but the image of each galaxy is magnified to make it easier to see; the real Universe is quite a lot emptier than this, in that the separation between galaxies is larger relative to their size.

One Hundred Years of Zel’dovich

Posted in The Universe and Stuff with tags , , , , on March 12, 2014 by telescoper

Lovely weather today, but it’s also been an extremely busy day with meetings and teaching. I did realize yesterday, however, that I had forgotten to mark a very important centenary at the weekend. If I hadn’t been such a slacker as to take last Saturday off work I would probably have been reminded…

The great Russian physicist Yakov Borisovich Zel’dovich (left) was born on March 8th 1914, so had he lived he would have been 100 years old last Saturday. To us cosmologists Zel’dovich is best known for his work on the large-scale structure of the Universe, but he only started to work on that subject relatively late in his career, during the 1960s. He in fact began his life in research as a physical chemist and arguably his greatest contribution to science was that he developed the first completely physically based theory of flame propagation (together with Frank-Kamenetskii). No doubt he also used insights gained from this work, together with his studies of detonation and shock waves, in the Soviet nuclear bomb programme in which he was a central figure, and which presumably led to the chestful of medals he’s wearing in the photograph.

My own connection with Zel’dovich is primarily through his scientific descendants, principally his former student Sergei Shandarin, who has a faculty position at the University of Kansas. For example, I visited Kansas back in 1992 and worked on a project with Sergei and Adrian Melott which led to a paper published in 1993, the abstract of which makes clear the debt it owed to the work of Zel’dovich.

The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel’dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is ‘enhanced’ by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel’dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.

The Zel’dovich Approximation referred to in this abstract is based on an extremely simple idea but which, as we showed in the above paper, turns out to be extremely accurate at reproducing the morphology of the “cosmic web” of large-scale structure.

Zel’dovich passed away in 1987. I was a graduate student at that time and had never had the opportunity to meet him. If I had done so I’m sure I would have found him fascinating and intimidating in equal measure, as I admired his work enormously, as did everyone I knew in the field of cosmology. Anyway, a couple of years after his death a review paper written by himself and Sergei Shandarin was published, along with the note:

The Russian version of this review was finished in the summer of 1987. By the tragic death of Ya. B.Zeldovich on December 2, 1987, about four-fifths of the paper had been translated into English. Professor Zeldovich would have been 75 years old on March 8, 1989 and was vivid and creative until his last day. The theory of the structure of the universe was one of his favorite subjects, to which he made many note-worthy contributions over the last 20 years.

As one does if one is vain I looked down the reference list to see if any of my papers were cited. I’d only published one paper before Zel’dovich died so my hopes weren’t high. As it happens, though, my very first paper (Coles 1986) was there in the list. That’s still the proudest moment of my life!


Anyway, this post gives me the opportunity to advertise that there is a special meeting called The Zel’dovich Universe coming up this summer in Tallinn, Estonia. It looks a really interesting conference and I really hope I can find the time to fit it into my schedule. I’ve never been to Estonia…

Power versus Pattern

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , on June 15, 2012 by telescoper

One of the challenges we cosmologists face is how to quantify the patterns we see in galaxy redshift surveys. In the relatively recent past the small size of the available data sets meant that only relatively crude descriptors could be used; anything sophisticated would be rendered useless by noise. For that reason, statistical analysis of galaxy clustering tended to be limited to the measurement of autocorrelation functions, usually constructed in Fourier space in the form of power spectra; you can find a nice review here.

Because it is so robust and contains a great deal of important information, the power spectrum has become ubiquitous in cosmology. But I think it’s important to realise its limitations.

Take a look at these two N-body computer simulations of large-scale structure:

The one on the left is a proper simulation of the “cosmic web” which is at least qualitatively realistic, in that it contains filaments, clusters and voids pretty much like what is observed in galaxy surveys.

To make the picture on the right I first  took the Fourier transform of the original  simulation. This approach follows the best advice I ever got from my thesis supervisor: “if you can’t think of anything else to do, try Fourier-transforming everything.”

Anyway each Fourier mode is complex and can therefore be characterized by an amplitude and a phase (the modulus and argument of the complex quantity). What I did next was to randomly reshuffle all the phases while leaving the amplitudes alone. I then performed the inverse Fourier transform to construct the image shown on the right.

What this procedure does is to produce a new image which has exactly the same power spectrum as the first. You might be surprised by how little the pattern on the right resembles that on the left, given that they share this property; the distribution on the right is much fuzzier. In fact, the sharply delineated features  are produced by mode-mode correlations and are therefore not well described by the power spectrum, which involves only the amplitude of each separate mode.

If you’re confused by this, consider the Fourier transforms of (a) white noise and (b) a Dirac delta-function. Both produce flat power-spectra, but they look very different in real space because in (b) all the Fourier modes are correlated in such a way that they are in phase at the one location where the pattern is not zero; everywhere else they interfere destructively. In (a) the phases are distributed randomly.

The moral of this is that there is much more to the pattern of galaxy clustering than meets the power spectrum…

The Fractal Universe, Part 1

Posted in The Universe and Stuff with tags , , , , on August 4, 2010 by telescoper

A long time ago I blogged about the Cosmic Web and one of the comments there suggested I write something about the idea that the large-scale structure of the Universe might be some sort of fractal.  There’s a small (but vocal) group of cosmologists who favour fractal cosmological models over the more orthodox cosmology favoured by the majority, so it’s definitely something worth writing about. I have been meaning to post something about it for some time now, but it’s too big and technical a matter to cover in one item. I’ve therefore decided to start by posting a slightly edited version of a short News and Views piece I wrote about the  question in 1998. It’s very out of date on the observational side, but I thought it would be good to set the scene for later developments (mentioned in the last paragraph), which I hope to cover in future posts.

—0—

One of the central tenets of cosmological orthodoxy is the Cosmological Principle, which states that, in a broad-brush sense, the Universe is the same in every place and in every direction. This assumption has enabled cosmologists to obtain relatively simple solutions of Einstein’s General Theory of Relativity that describe the dynamical behaviour of the Universe as a whole. These solutions, called the Friedmann models [1], form the basis of the Big Bang theory. But is the Cosmological Principle true? Not according to Francesco Sylos-Labini et al. [2], who argue, controversially, that the Universe is not uniform at all, but has a never-ending hierarchical structure in which galaxies group together in clusters which, in turn, group together in superclusters, and so on.

These claims are completely at odds with the Cosmological Principle and therefore with the Friedmann models and the entire Big Bang theory. The central thrust of the work of Sylos-Labini et al. is that the statistical methods used by cosmologists to analyse galaxy clustering data are inappropriate because they assume the property of large-scale homogeneity at the outset. If one does not wish to assume this then one must use different methods.

What they do is to assume that the Universe is better described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a volume of radius R is proportional to R^D. If galaxies are distributed uniformly then D = 3, as the number of neighbours simply grows with the volume of the sphere, i.e. as R^3 times the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume. Sylos-Labini et al. argue that D = 2, which suggests a roughly planar (sheet-like) distribution of galaxies.
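
To see how such a measurement works in practice, here is a toy sketch in Python (my own illustration using scipy’s cKDTree, not the estimator actually used by Sylos-Labini et al.): count the mean number of neighbours within radius R and fit the slope of log N against log R. For uniformly distributed points in a periodic box the slope comes out close to D = 3.

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy estimate of the fractal dimension D from the scaling N(<R) ~ R^D of
# the mean neighbour count (an illustration, not the published analysis).
rng = np.random.default_rng(1)
points = rng.uniform(0.0, 1.0, size=(20000, 3))  # homogeneous: expect D ~ 3

tree = cKDTree(points, boxsize=1.0)              # periodic box: no edge effects
radii = np.logspace(-2, -1, 10)                  # R from 0.01 to 0.1
mean_counts = np.array([
    # query_ball_point counts the point itself, so subtract 1.
    np.mean(tree.query_ball_point(points, r, return_length=True)) - 1.0
    for r in radii
])

# D is the logarithmic slope of the neighbour count against radius.
D, _ = np.polyfit(np.log(radii), np.log(mean_counts), 1)
print(f"estimated fractal dimension D = {D:.2f}")
```

Running the same counts on a genuinely lower-dimensional point set (for example, points confined to a plane, for which the slope comes out near 2) is an easy way to convince yourself that the statistic really does distinguish the two cases.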

Most cosmologists would accept that the distribution of galaxies on relatively small scales, up to perhaps a few tens of megaparsecs (Mpc), can indeed be described in terms of a fractal model. This small-scale clustering is expected to be dominated by purely gravitational physics, and gravity has no particular length scale associated with it. But standard theory requires that the fractal dimension should approach the homogeneous value D = 3 on large enough scales. According to standard models of cosmological structure formation, this transition should occur on scales of a few hundred Mpc.

The main source of the controversy is that most available three-dimensional maps of galaxy positions are not large enough to encompass the expected transition to homogeneity. Distances must be inferred from redshifts, and it is difficult to construct these maps from redshift surveys, which require spectroscopic studies of large numbers of galaxies.

Sylos-Labini et al. have analysed a number of redshift surveys, including the largest so far available, the Las Campanas Redshift Survey [3]; see below. They find D = 2 for all the data they look at, and argue that there is no transition to homogeneity for scales up to 4,000 Mpc, way beyond the expected turnover. If this were true, it would indeed be bad news for the orthodox among us.

The survey maps the Universe out to recession velocities of 60,000 km s⁻¹, corresponding to distances of a few hundred million parsecs. Although no fractal structure on the largest scales is apparent (there are no clear voids or concentrations on the same scale as the whole map), one statistical analysis [2] finds a fractal dimension of two in this and other surveys, for all scales – conflicting with a basic principle of cosmology.

Their results are, however, at variance with the visual appearance of the Las Campanas survey, for example, which certainly seems to display large-scale homogeneity. Objections to these claims have been lodged by Luigi Guzzo [4], for instance, who has criticized their handling of the data and has presented independent results that appear to be consistent with a transition to homogeneity. It is also true that Sylos-Labini et al. have done their cause no good by basing some conclusions on a heterogeneous compilation of redshifts called the LEDA database [5], which is not a controlled sample and so is completely unsuitable for this kind of study. Finally, it seems clear that they have substantially overestimated the effective depth of the catalogues they are using. But although their claims remain controversial, the consistency of the results obtained by Sylos-Labini et al. is impressive enough to raise doubts about the standard picture.

Mainstream cosmologists are not yet so worried as to abandon the Cosmological Principle. Most are probably quite happy to admit that there is no overwhelming direct evidence in favour of global uniformity from current three-dimensional galaxy catalogues, which are in any case relatively shallow. But this does not mean there is no evidence at all: the near-isotropy of the sky temperature of the cosmic microwave background, the uniformity of the cosmic X-ray background, and the properties of source counts are all difficult to explain unless the Universe is homogeneous on large scales [6]. Moreover, Hubble’s law itself is a consequence of large-scale homogeneity: if the Universe were inhomogeneous one would not expect to see a uniform expansion, but an irregular pattern of velocities resulting from large-scale density fluctuations.

But above all, it is the principle of Occam’s razor that guides us: in the absence of clear evidence against it, the simplest model compatible with the data is to be preferred. Several observational projects are already under way, including the Sloan Digital Sky Survey and the Anglo-Australian 2dF Galaxy Redshift Survey, that should chart the spatial distribution of galaxies in enough detail to provide an unambiguous answer to the question of large-scale cosmic uniformity. In the meantime, and in the absence of clear evidence against it, the Cosmological Principle remains an essential part of the Big Bang theory.

References

  1. Friedmann, A. Z. Phys. 10, 377–386 (1922).
  2. Sylos-Labini, F., Montuori, M. & Pietronero, L. Phys. Rep. 293, 61–226 (1998).
  3. Shectman, S. et al. Astrophys. J. 470, 172–188 (1996).
  4. Guzzo, L. New Astron. 2, 517–532 (1997).
  5. Paturel, G. et al. in Information and Online Data in Astronomy (eds Egret, D. & Albrecht, M.) 115 (Kluwer, Dordrecht, 1995).
  6. Peebles, P. J. E. Principles of Physical Cosmology (Princeton Univ. Press, NJ, 1993).