Simulations and False Assumptions

Just time for an afternoon quickie!

I saw this abstract by Smith et al. on the arXiv today:

Future large-scale structure surveys of the Universe will aim to constrain the cosmological model and the true nature of dark energy with unprecedented accuracy. In order for these surveys to achieve their designed goals, they will require predictions for the nonlinear matter power spectrum to sub-percent accuracy. Through the use of a large ensemble of cosmological N-body simulations, we demonstrate that if we do not understand the uncertainties associated with simulating structure formation, i.e. knowledge of the ‘true’ simulation parameters, and simply seek to marginalize over them, then the constraining power of such future surveys can be significantly reduced. However, for the parameters {n_s, h, Om_b, Om_m}, this effect can be largely mitigated by adding the information from a CMB experiment, like Planck. In contrast, for the amplitude of fluctuations sigma8 and the time-evolving equation of state of dark energy {w_0, w_a}, the mitigation is mild. On marginalizing over the simulation parameters, we find that the dark-energy figure of merit can be degraded by ~2. This is likely an optimistic assessment, since we do not take into account other important simulation parameters. A caveat is our assumption that the Hessian of the likelihood function does not vary significantly when moving from our adopted to the ‘true’ simulation parameter set. This paper therefore provides strong motivation for rigorous convergence testing of N-body codes to meet the future challenges of precision cosmology.

This paper asks an important question, which I could paraphrase as “Do we trust N-body simulations too much?”. The use of numerical codes in cosmology is widespread and there’s no question that they have driven the subject forward in many ways, not least because they can generate “mock” galaxy catalogues to help plan survey strategies. However, I’ve always worried that there is a tendency to trust these calculations too much. On the one hand there’s the question of small-scale resolution, on the other the finite size of the computational volume, and there are various other complications in between. In other words, simulations are approximate. To some extent our ability to extract information from surveys will therefore be limited by the inaccuracy of our calculation of the theoretical predictions.
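To get a feel for why marginalizing over poorly-known simulation parameters hurts, here is a toy Fisher-matrix sketch in Python. I should stress that all the numbers, and the single nuisance parameter, are invented purely for illustration; this is emphatically not the calculation Smith et al. actually perform, just the general mechanism.

    # Toy Fisher-matrix sketch: all numbers here are invented for illustration.
    # If "simulation" parameters are uncertain and partially degenerate with the
    # cosmological ones, marginalising over them inflates the error bars and
    # lowers the dark-energy figure of merit.
    import numpy as np

    # Fisher matrix for two cosmological parameters, say (w_0, w_a),
    # assuming the simulation parameters are perfectly known.
    F_fixed = np.array([[40.0, -12.0],
                        [-12.0,  6.0]])

    # The same two parameters plus one nuisance "simulation" parameter
    # that is partially degenerate with them.
    F_marg = np.array([[40.0, -12.0, 12.0],
                       [-12.0,  6.0, -4.0],
                       [ 12.0, -4.0,  5.0]])

    def errors(F, keep):
        """1-sigma errors on the parameters in `keep`, marginalised over the rest."""
        cov = np.linalg.inv(F)
        return np.sqrt(np.diag(cov))[keep]

    def figure_of_merit(F):
        """DETF-style figure of merit: 1/sqrt(det) of the (w_0, w_a) covariance block."""
        cov = np.linalg.inv(F)[:2, :2]
        return 1.0 / np.sqrt(np.linalg.det(cov))

    print("errors, simulation parameters fixed:       ", errors(F_fixed, [0, 1]))
    print("errors, simulation parameters marginalised:", errors(F_marg, [0, 1]))
    print("figure-of-merit degradation:               ",
          figure_of_merit(F_fixed) / figure_of_merit(F_marg))

With these made-up numbers the figure of merit happens to drop by roughly a factor of two, coincidentally the size of the effect quoted in the abstract; the real degradation depends entirely on how degenerate the simulation parameters are with the cosmological ones, which is precisely what the paper quantifies with its ensemble of N-body runs.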

Anyway, the paper gives us quite a few things to think about and I think it might provoke a bit of discussion, which is why I mentioned it here: to encourage folk to read it and give their opinions.

The use of the word “simulation” always makes me smile. Being a crossword nut, I spend far too much time looking in dictionaries, but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:

1.

a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the intended meaning. This is worth bearing in mind if you prefer old-fashioned analytical theory!

In football, of course, you can even get sent off for simulation…


3 Responses to “Simulations and False Assumptions”

  1. “To some extent our ability to extract information from surveys will therefore be limited by the inaccuracy of our calculation of the theoretical predictions.”

    I was surprised to hear that this is also the reason for the uncertainty in g-2, i.e. the uncertainty in the agreement between theory and observation comes neither from observational errors nor from approximations in QED but rather from uncertainties in the calculations themselves. Of course, this uncertainty is much smaller than even in “precision cosmology”.

    • Anton Garrett Says:

      And that is a REAL testament to experimentalists.

      • Indeed. Apparently there is a Japanese physicist who does these calculations. One of my professors told me that he had recently met him at a conference and that he looked rather glum, so he asked what the problem was. The “problem” was that experimentalists had published a better measurement of g-2, which meant that he had several weeks of calculation ahead of him in order for theory to catch up. :-(
