The Power Spectrum and the Cosmic Web

One of the things that makes this conference different from most cosmology meetings is that it is focussing on the large-scale structure of the Universe as a topic in itself, rather than as a source of statistical information about, e.g., cosmological parameters. This means that we’ve been hearing about a set of statistical methods that is somewhat different from those usually used in the field (which are primarily based on second-order quantities).

One of the challenges cosmologists face is how to quantify the patterns we see in galaxy redshift surveys. Until relatively recently, the small size of the available data sets meant that only rather crude descriptors could be used; anything more sophisticated would be rendered useless by noise. For that reason, statistical analysis of galaxy clustering tended to be limited to the measurement of autocorrelation functions, usually constructed in Fourier space in the form of power spectra; you can find a nice review here.

Because it is so robust and contains a great deal of important information, the power spectrum has become ubiquitous in cosmology. But I think it’s important to realise its limitations.

Take a look at these two N-body computer simulations of large-scale structure:

The one on the left is a proper simulation of the “cosmic web” which is at least qualitatively realistic, in that it contains filaments, clusters and voids pretty much like what is observed in galaxy surveys.

To make the picture on the right I first took the Fourier transform of the original simulation. This approach follows the best advice I ever got from my thesis supervisor: “if you can’t think of anything else to do, try Fourier-transforming everything.”

Anyway, each Fourier mode is complex and can therefore be characterized by an amplitude and a phase (the modulus and argument of the complex quantity). What I did next was to randomly reshuffle all the phases while leaving the amplitudes alone. I then performed the inverse Fourier transform to construct the image shown on the right.
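In numpy the whole procedure boils down to something like this. This is only a rough sketch: the array name density, and the trick of borrowing the random phases from the transform of a white-noise field (which guarantees the symmetry needed for the inverse transform to be real), are illustrative choices rather than necessarily what was used to make the figures. Here each phase is replaced by a fresh random value rather than literally permuted, which destroys the phase correlations in the same way.

    import numpy as np

    def shuffle_phases(density, seed=0):
        """Return a field with the same Fourier amplitudes as `density`,
        but with its phases replaced by random values."""
        rng = np.random.default_rng(seed)
        # Moduli of the original field's Fourier modes: these fix the power spectrum
        amplitudes = np.abs(np.fft.fft2(density))
        # Random phases borrowed from the transform of a real white-noise field,
        # so they automatically have the Hermitian symmetry a real image requires
        random_phases = np.angle(np.fft.fft2(rng.standard_normal(density.shape)))
        # Recombine and transform back: same amplitudes, scrambled phases
        return np.real(np.fft.ifft2(amplitudes * np.exp(1j * random_phases)))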

What this procedure does is to produce a new image which has exactly the same power spectrum as the first. You might be surprised by how little the pattern on the right resembles that on the left, given that they share this property; the distribution on the right is much fuzzier. In fact, the sharply delineated features are produced by mode-mode correlations and are therefore not well described by the power spectrum, which involves only the amplitude of each separate mode. In effect, the power spectrum is insensitive to the part of the Fourier description of the pattern that is responsible for delineating the cosmic web.

If you’re confused by this, consider the Fourier transforms of (a) white noise and (b) a Dirac delta-function. Both produce flat power spectra, but they look very different in real space because in (b) all the Fourier modes are correlated in such a way that they are in phase at the one location where the pattern is not zero; everywhere else they interfere destructively. In (a) the phases are distributed randomly.
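You can check this in a few lines of numpy (again just a quick sketch; the array length and random seed are arbitrary):

    import numpy as np

    n = 1024
    rng = np.random.default_rng(42)

    # (a) white noise: an independent random value at every point
    noise = rng.standard_normal(n)

    # (b) a delta-function: a single spike at the origin, zero everywhere else
    spike = np.zeros(n)
    spike[0] = 1.0

    # Both have flat power spectra...
    print(np.abs(np.fft.fft(spike))**2)   # exactly 1 at every wavenumber
    print(np.abs(np.fft.fft(noise))**2)   # flat on average (about n), with scatter

    # ...but very different phases: the spike's modes are all in phase at x = 0,
    # whereas the noise has essentially uniform, uncorrelated phases
    print(np.angle(np.fft.fft(spike)))    # all zero
    print(np.angle(np.fft.fft(noise)))    # scattered over (-pi, pi]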

The moral of this is that there is much more to the pattern of galaxy clustering than meets the power spectrum…

32 Responses to “The Power Spectrum and the Cosmic Web”

  1. I wonder if the fractal properties of the cosmic web are being given due consideration.

    • What is “due consideration”? Check to see if the universe is self-similar, as opposed to homogeneous, on cosmologically relevant scales? Been there, done that; it’s not. (The same two blokes making the same claim for 30 years, despite the huge advances in observations since then, does not make it so either.)

      Don’t confuse a Charlier-style fractal universe with the fractal dimension of large-scale structure.

      • About 100 years ago cosmologists insisted that the cosmos was a homogeneous distribution of stars and that there was no further extension of the cosmological hierarchy. That was just before they became aware of vastly larger-scale structures: galaxies, galactic clusters, superclusters and assorted galactic phenomena.

        Love those cosmologists: often wrong but never in doubt.

        Once again, our current observational limits are probably a very poor approximation of the limits (if any!) of nature’s hierarchy.

      • telescoper Says:

        Time (and much more data) will tell. John Peacock’s talk today pointed out that there’s a kind of Moore’s Law for mapping the Universe: since Slipher’s first extragalactic measurement in 1912 the number of redshifts has doubled every 2.6 years. We now have enough data to test many independent theoretical ideas, and although there remain open questions there’s no question that we now know an awful lot more than we did 100 years ago.

      • True, but the past is not always a guide to the future. Despite James Burke, change is not a goal in itself. Yes, a long time ago people didn’t realize the extent of the world, but now the entire Earth is mapped and you can look at any piece of it on your smartphone whenever you want.

        Science converges on the truth. Absolute truth might never be reached, but that doesn’t mean that we are just as wrong as the ancient Egyptians were.

        Very probably, our visible universe is a small part of our universe (not to mention the multiverse(s)). Perhaps, very far away, it is quite different. What has changed in the last few decades is that we have now mapped orders of magnitude more of the volume of the universe, and it still is homogeneous on large scales. It would be a strange coincidence if this broke down just beyond the horizon.

        And there is no observational evidence for a fractal structure on large scales, and no serious theoretical reason to expect one.

      • Well, you can call a vast cosmic web of filaments and voids (which have recently been found to contain mini-filaments, by the way, i.e. yet more fractal self-similarity!) “homogeneous”, but only in a very crude statistical sense. If your milk came full of such filaments (bacteria) would you call it “homogeneous”?

        If/when our observational capacities allow us to observe or scientifically infer scales that are way beyond our current limits, we may see whole new scales of nature’s hierarchy that we are currently oblivious of. Just the way we went from thinking stars were the upper cutoff to realizing that the hierarchy went WAY beyond that.

        Yes we have progressed a lot in the last 100 years, but if you think we now know all there is to know (with just a few loose ends to tie up), then you are making a classic blunder repeated throughout history by those who do not learn from history.

      • telescoper Says:

        I never said that and don’t think that. I think we might answer the questions we’re currently asking pretty soon, but those questions will then be replaced by other, deeper ones. I remain convinced, for example, that we’re asking the wrong question about Dark Energy.

      • telescoper Says:

        Every statement about homogeneity involves some sort of coarse-graining. Milk is not homogeneous on the scale of its constituent molecules. The issue is about the size of the smoothing scale needed.

        To confuse matters further, when cosmologists talk about “homogeneity” they often mean “statistical homogeneity”, meaning that the statistical properties don’t depend on position, like a time series consisting of stationary Gaussian noise.

        Curdled milk is not strictly homogeneous but does possess statistical homogeneity (ignoring boundary effects).

      • “If your milk came full of such filaments (bacteria) would you call it ‘homogeneous’?”

        It is homogeneous in the sense that is necessary for the universe to be described as a Friedmann-Lemaître universe, i.e. the average density is the same everywhere. It doesn’t matter what the structure is on small scales (small compared to, say, the Hubble length). This does not imply that the large-scale structure of the universe is fractal.

        Yes, some people do make mistakes (“guitar bands are out” said the A&R man as he rejected the Beatles), but that does not mean that everything is provisional and subject to as much change in the future as there has been in the past.

        Forget Thomas Kuhn.

      • My earlier comments were in reply to Mr. Helbig, not Telescoper.

        Regarding ‘forget Kuhn’, I repeat: PH wonderfully demonstrates the veracity of Landau’s dictum: often wrong, but never in doubt.

      • I have no lack of doubt. I doubt that the universe is fractal on large scales. I doubt that most planets are captured, rather than being formed with their star. I doubt that dark matter is primarily black holes. I doubt that the electron has substructure.

        I have no lack of doubt.

      • Anton Garrett Says:

        What does “fractal on large scales” mean?

      • Sigh. Wrong again Helbig. You confuse the words “doubt” and “certainty”.

        You, sir, are certain that these specific predictions are wrong. I, on the other hand, doubt that you can justify such religious certainty with scientific evidence.

        You are selling absolute certainty. I am advocating open-minded doubt about the standard models of cosmology and particle physics, i.e., the current balkanized and increasingly problematic paradigm.

        Let others judge whose attitude is more scientific.

      • What does “fractal on large scales” mean?

        It means a Charlier-style universe which is self-similar at all scales. It doesn’t seem to be supported by observations.

        A universe might be fractal at some scales, i.e. exhibit self-similarity over a relatively large range, but still “turn over” to homogeneity at large scales. The observed large-scale structure of the universe is of this kind: self-similar over a range of scales, but homogeneous at the largest scales.

      • Unless, obviously, there is another turnover back to inhomogeneity like we observe over and over again at small scales.

        There are many different types of fractal and hierarchical cosmological models.

        Helbig is making the same unimaginative mistake that cosmologists made before the discovery of galactic scale phenomena. They assumed the approximate statistical “homogeneity” on stellar scales would continue indefinitely to higher scales.

        Such absolute certainty is a form of ignorance – ignorance of other equally valid and highly natural ideas.

  2. Anton Garrett Says:

    “we’ve been hearing about a set of statistical methods that is somewhat different from those usually used in the field (which are primarily second-order quantities).”

    A method is not a quantity, and 2nd order wrt what? (I’m seeking enlightenment in asking that, not trying to pick holes.)

    • telescoper Says:

      Quite so. I was writing this during one of the conference sessions and wasn’t paying as much attention as I should have been to what I was writing. I should have had a “based on” in there, which I have now added.

      I mean “second-order” in the sense of moments of a distribution: the mean being first order and the variance second order. In the case of a spatially fluctuating field, the concept of second order generalises to the two-point autocovariance function (in real space), which depends on the separation of the points, or the power spectrum (essentially the variance of the Fourier amplitudes), which is a function of wavenumber. The autocovariance function and the power spectrum are a Fourier transform pair, so they contain precisely the same information; this is the Wiener-Khintchin theorem.
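      In symbols (schematically; this is just my notation, and conventions for the factors of 2π differ between authors): writing δ(x) for the density fluctuation field,

        \xi(\mathbf{r}) = \langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}) \rangle, \qquad P(\mathbf{k}) = \int \xi(\mathbf{r})\, e^{-i\,\mathbf{k}\cdot\mathbf{r}}\, \mathrm{d}^3 r,

      and for a statistically homogeneous and isotropic field P depends only on the wavenumber k = |k|.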

      • Anton Garrett Says:

        “Quite so. I was writing this during one of the conference sessions and wasn’t paying as much attention as I should have been to what I was writing.”

        Meaning you were paying attention to the conference session? Thanks for the clarification, anyway.

      • telescoper Says:

        Perhaps I wasn’t paying attention to either!

      • Anton Garrett Says:

        Well if you were following sport instead then there was plenty going on. Having let Sri Lanka wipe out a 3-figure deficit on 1st innings at Headingley, and then get away to set us 300+ (partly because dear old Jimmy is now almost past it and Broad was playing with an iffy knee), we crumbled but then almost saved it – until they went one better than we did at Lord’s and took our last wicket in the last over. And I’m sure you know that England had a worst-ever World Cup.

      • telescoper Says:

        I’d given up on the cricket after finding out the score at the end of Day 4. I was amazed to get back to my room just before 9pm to find England were still hanging on at nearly 7pm on Day 5. But my relief soon evaporated…

        Two great test matches though.

        My disappointment was compounded by England’s draw in the World Cup. Had they lost I would have netted £500 as I’d bet on them to lose all three games.

      • Anton Garrett Says:

        This would have been a better bet:

        http://www.bbc.co.uk/news/world-europe-28020605

      • telescoper Says:

        Yes, I saw that. What a thug Suarez is.

      • Anton Garrett Says:

        He’ll bite the dust soon enough.

  3. George Jones Says:

    “A method is not a quantity”

    A metonymic shift?🙂

  4. Your example is great. Where is it written up?
