Archive for galaxy formation

Galaxy Formation in the EAGLE Project

Posted in The Universe and Stuff on December 8, 2016 by telescoper

Yesterday I went to a nice Colloquium by Rob Crain of Liverpool John Moores University (which is in the North West of England). Here’s the abstract of his talk, which was entitled
Cosmological hydrodynamical simulations of the galaxy population:

I will briefly recap the motivation for, and progress towards, numerical modelling of the formation and evolution of the galaxy population – from cosmological initial conditions at early epochs through to the present day. I will introduce the EAGLE project, a flagship program of such simulations conducted by the Virgo Consortium. These simulations represent a major development in the discipline, since they are the first to broadly reproduce the key properties of the evolving galaxy population, and do so using energetically-feasible feedback mechanisms. I shall present a broad range of results from analyses of the EAGLE simulation, concerning the evolution of galaxy masses, their luminosities and colours, and their atomic and molecular gas content, to convey some of the strengths and limitations of the current generation of numerical models.

I added the link to the EAGLE project so you can find more information. As one of the oldies in the audience I can’t help remembering the old days of the galaxy formation simulation game. When I started my PhD back in 1985 the state of the art was a gravity-only simulation of 32^3 particles in a box. Nowadays one can manage about 2000^3 particles while also having a good go at dealing not only with gravity but also the complex hydrodynamical processes involved in assembling a galaxy of stars, gas, dust and dark matter from a set of primordial fluctuations present in the early Universe. In these modern simulations one does not just track the mass distribution but also various thermodynamic properties such as temperature, pressure, internal energy and entropy, which means that they require large supercomputers. This certainly isn’t a solved problem – different groups get results that differ by an order of magnitude in some key predictions – but the game has certainly moved on dramatically in the past thirty years or so.
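To give a flavour of what the 1985-vintage game involved, here is a minimal sketch in Python of a gravity-only N-body calculation: direct summation of the (softened) pairwise forces with a leapfrog integrator. All the numbers here are illustrative, and real cosmological codes use periodic boxes, comoving coordinates and clever tree or particle-mesh gravity solvers, none of which is attempted in this toy.

```python
import numpy as np

G = 1.0     # gravitational constant in code units
EPS = 0.05  # softening length, to tame close encounters
N = 32**3   # the 1985-vintage particle count mentioned above

def accelerations(pos, mass):
    """Direct O(N^2) summation of softened gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                  # separation vectors to every particle
        r2 = (d**2).sum(axis=1) + EPS**2  # softened squared distances
        r2[i] = np.inf                    # exclude the self-force
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, n_steps):
    """Kick-drift-kick integration of the equations of motion."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc   # half kick
        pos += dt * vel         # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc   # half kick
    return pos, vel

rng = np.random.default_rng(1985)
pos = rng.uniform(0.0, 1.0, size=(N, 3))  # random particles in a unit box
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)
# pos, vel = leapfrog(pos, vel, mass, dt=1e-3, n_steps=10)  # O(N^2) per step: slow even at 32^3!
```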

Another thing that has certainly improved a lot is data visualization: here is a video of one of the EAGLE simulations, showing a region of the Universe about 25 megaparsecs across. The gas is colour-coded for temperature. As the simulation evolves you can see the gas first condense into the filaments of the Cosmic Web, thereafter forming denser knots in which stars form and become galaxies, experiencing in some cases explosive events which expel the gas. It’s quite a messy business, which is why one has to do these things numerically rather than analytically, but it’s certainly fun to watch!


That Big Black Hole Story

Posted in The Universe and Stuff on February 28, 2015 by telescoper

There’s been a lot of news coverage this week about a very big black hole, so I thought I’d post a little bit of background.  The paper describing the discovery of the object concerned appeared in Nature this week, but basically it’s a quasar at a redshift z=6.30. That’s not the record for such an object. Not long ago I posted an item about the discovery of a quasar at redshift 7.085, for example. But what’s interesting about this beastie is that it’s a very big beastie, with a central black hole estimated to have a mass of around 12 billion times the mass of the Sun, which is a factor of ten or more larger than other objects found at high redshift.

Anyway, I thought perhaps it might be useful to explain a little bit about what difficulties this observation might pose for the standard “Big Bang” cosmological model. Our general understanding of how galaxies form is that gravity gathers cold non-baryonic matter into clumps into which “ordinary” baryonic material subsequently falls, eventually forming a luminous galaxy surrounded by a “halo” of (invisible) dark matter. Quasars are galaxies in which enough baryonic matter has collected in the centre of the halo to build a supermassive black hole, which powers a short-lived phase of extremely high luminosity.

The key idea behind this picture is that the haloes form by hierarchical clustering: the first to form are small, but they merge rapidly into objects of increasing mass as time goes on. We have a fairly well-established theory of what happens with these haloes – called the Press-Schechter formalism – which allows us to calculate the number density N(M,z) of objects of a given mass M as a function of redshift z. As an aside, it’s interesting to remark that the paper largely responsible for establishing the efficacy of this theory was written by George Efstathiou and Martin Rees in 1988, on the topic of high-redshift quasars.
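For reference – this is the textbook statement of the result rather than anything specific to that paper – the Press-Schechter mass function takes the form

N(M,z)\,{\rm d}M = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,\frac{\delta_{c}}{\sigma(M,z)}\,\left|\frac{{\rm d}\ln\sigma}{{\rm d}\ln M}\right|\,\exp\left(-\frac{\delta_{c}^{2}}{2\sigma^{2}(M,z)}\right)\,{\rm d}M,

where \bar{\rho} is the mean matter density, \sigma(M,z) is the linear rms fluctuation on mass scale M, and \delta_{c}\simeq 1.686 is the critical linear overdensity for collapse. The exponential tail is why the abundance of the most massive haloes is so spectacularly sensitive to redshift.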

Anyway, this is how the mass function of haloes is predicted to evolve in the standard cosmological model; the different lines show the distribution as a function of redshift, for redshifts from 0 (red) to 9 (violet):

[Figure: evolution of the halo mass function N(M,z) for redshifts from z = 0 to z = 9]

Note that the typical size of a halo increases with decreasing redshift, but it’s only at really high masses that you see a really dramatic effect. The plot is logarithmic, so the number density of large-mass haloes falls off by several orders of magnitude over the range of redshifts shown. The mass of the black hole responsible for the recently-detected high-redshift quasar is estimated to be about 1.2 \times 10^{10} M_{\odot}. But how does that relate to the mass of the halo within which it resides? Clearly the dark matter halo has to be more massive than the baryonic material it collects, and therefore more massive than the central black hole, but by how much?

This question is very difficult to answer, as it depends on how luminous the quasar is, how long it lives, what fraction of the baryons in the halo fall into the centre, what efficiency is involved in generating the quasar luminosity, etc. Efstathiou and Rees argued that to power a quasar with luminosity of order 10^{13} L_{\odot} for a time of order 10^{8} years requires a parent halo of mass about 2\times 10^{11} M_{\odot}. Generally, it’s a reasonable back-of-an-envelope estimate that the halo mass would be about a hundred times larger than that of the central black hole, so the halo housing this one could be around 10^{12} M_{\odot}.
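One way to unpack that factor of a hundred (my own back-of-envelope numbers here, not those of Efstathiou and Rees) is to write M_{\rm BH} = f_{\rm BH}\, f_{b}\, M_{\rm halo}, where f_{b} \approx \Omega_{b}/\Omega_{m} \approx 0.16 is the cosmic baryon fraction and f_{\rm BH} is the fraction of the halo’s baryons that ends up in the hole. A halo-to-hole ratio of a hundred then corresponds to f_{\rm BH}\, f_{b} \approx 0.01, i.e. f_{\rm BH} \approx 6 per cent, giving

M_{\rm halo} \sim 100\, M_{\rm BH} \approx 100 \times 1.2 \times 10^{10}\, M_{\odot} \approx 10^{12}\, M_{\odot}.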

You can see from the plot above that the abundance of such haloes is down by quite a factor at redshift 7 compared to redshift 0 (the present epoch), but the fall-off is even more precipitous for haloes of larger mass than this. We really need to know how abundant such objects are before drawing definitive conclusions, and one object isn’t enough to put a reliable estimate on the general abundance, but with the discovery of this object it’s certainly getting interesting. Haloes the size of a galaxy cluster, i.e. 10^{14} M_{\odot}, are rarer by many orders of magnitude at redshift 7 than at redshift 0, so if anyone ever finds one at this redshift that would really be a shock to many a cosmologist’s system, as would be the discovery of quasars with such a high mass at redshifts significantly higher than seven.

Another thing worth mentioning is that, although there might be a sufficient number of potential haloes to serve as hosts for a quasar, there remains the difficult issue of understanding precisely how the black hole forms and especially how long it takes to do so. This aspect of the process of quasar formation is much more complicated than the halo distribution, so it’s probably on detailed models of black-hole growth that this discovery will have the greatest impact in the short term.
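To put a hedged number on the timing problem (these are standard textbook figures, not taken from the discovery paper): a black hole accreting at the Eddington limit with radiative efficiency \epsilon grows exponentially, with an e-folding (Salpeter) time

t_{\rm Sal} = \frac{\epsilon}{1-\epsilon}\,\frac{\sigma_{T} c}{4\pi G m_{p}} \approx 50 \hbox{ Myr for } \epsilon = 0.1,

so growing from a \sim 100\, M_{\odot} stellar seed to 1.2 \times 10^{10}\, M_{\odot} requires \ln(1.2\times 10^{8}) \approx 19 e-foldings, or roughly 0.9 billion years of uninterrupted Eddington-limited accretion – uncomfortably close to the age of the Universe at redshift 6.3, which is itself only about 0.9 billion years.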

Illustris, Cosmology, and Simulation…

Posted in The Universe and Stuff on May 8, 2014 by telescoper

There’s been quite a lot of news coverage over the last day or two emanating from a paper just out in the journal Nature by Vogelsberger et al. which describes a set of cosmological simulations called Illustris; see for example here and here.

The excitement revolves around the fact that Illustris represents a bit of a landmark, in that it’s the first hydrodynamical simulation with sufficient dynamic range to fully resolve the formation and evolution of individual galaxies within the cosmic web of large-scale structure.

The simulations obviously represent a tremendous piece of work; they were run on supercomputers in France, Germany, and the USA; the largest of them was run on no fewer than 8,192 computer cores and took 19 million CPU hours. A single state-of-the-art desktop computer would require more than 2000 years to perform this calculation!

There’s even a video to accompany it (shame about the music):

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:

1.

a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the meaning intended to be conveyed by the usage in the context of cosmological simulations. This is worth bearing in mind if you prefer old-fashioned analytical theory and want to wind up a simulationist! In football, of course, you can even get sent off for simulation…

Reproducing a reasonable likeness of something in a computer is not the same as understanding it, but that is not to say that these simulations aren’t incredibly useful and powerful, not just for making lovely pictures and videos but for helping to plan large-scale survey programmes that can go and map cosmological structures on the same scale. Simulations of this scale are needed to help design observational and data-analysis strategies for, e.g., the forthcoming Euclid mission.

Cosmic Swirly Straws Feed Galaxy

Posted in The Universe and Stuff on June 5, 2013 by telescoper

I came across this video on YouTube and was intrigued because the title seemed like a crossword clue (to which I couldn’t figure out the answer). It turns out that it goes with a piece in the Guardian which describes a computer simulation showing the formation of a galaxy during the first 2 billion years of the Universe’s evolution. Those of us interested in cosmic structures on a larger scale than galaxies usually show such simulations in co-moving coordinates (i.e. in a box that expands at the same rate as the Universe), but this one is in physical coordinates showing the actual size of the objects therein; the galaxy is seen first to condense out of the expanding distribution of matter, but then grows by accreting matter in a complicated and rather beautiful way.
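Since the comoving-versus-physical distinction catches people out, here is a minimal sketch of the conversion in Python (the box size and redshifts are just illustrative values I have picked):

```python
import numpy as np

# Physical position = scale factor times comoving position: r = a(t) * x,
# with a = 1/(1 + z) normalised so that a = 1 today. A comoving box has a
# fixed side length; its physical size grows as the Universe expands.

def physical_from_comoving(x_comoving_mpc, z):
    """Convert comoving lengths (in Mpc) to physical lengths at redshift z."""
    a = 1.0 / (1.0 + z)
    return a * np.asarray(x_comoving_mpc)

box_comoving = 25.0  # Mpc (comoving); illustrative only
for z in [10.0, 2.0, 0.0]:
    size = float(physical_from_comoving(box_comoving, z))
    print(f"z = {z:4.1f}: physical box size = {size:6.2f} Mpc")
```

In a comoving plot the large-scale pattern appears to “freeze out”, whereas in physical coordinates everything visibly expands around the collapsing galaxy, which is why this video looks so different from the usual ones.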

This calculation includes gravitational and hydrodynamical effects, allowing it to trace the separate behaviour of dark matter and gas (predominantly hydrogen). You can see that this particular object forms very early on; the current age of the Universe is estimated to be about 13–14 billion years. When we look far into space using very big telescopes we see objects from which light has taken billions of years to reach us. We can therefore actually see galaxies as they were forming, and so test observationally whether they form as theory (and simulation) suggests.

Simulations and False Assumptions

Posted in The Universe and Stuff on November 29, 2012 by telescoper

Just time for an afternoon quickie!

I saw this abstract by Smith et al. on the arXiv today:

Future large-scale structure surveys of the Universe will aim to constrain the cosmological model and the true nature of dark energy with unprecedented accuracy. In order for these surveys to achieve their designed goals, they will require predictions for the nonlinear matter power spectrum to sub-percent accuracy. Through the use of a large ensemble of cosmological N-body simulations, we demonstrate that if we do not understand the uncertainties associated with simulating structure formation, i.e. knowledge of the `true’ simulation parameters, and simply seek to marginalize over them, then the constraining power of such future surveys can be significantly reduced. However, for the parameters {n_s, h, Om_b, Om_m}, this effect can be largely mitigated by adding the information from a CMB experiment, like Planck. In contrast, for the amplitude of fluctuations sigma8 and the time-evolving equation of state of dark energy {w_0, w_a}, the mitigation is mild. On marginalizing over the simulation parameters, we find that the dark-energy figure of merit can be degraded by ~2. This is likely an optimistic assessment, since we do not take into account other important simulation parameters. A caveat is our assumption that the Hessian of the likelihood function does not vary significantly when moving from our adopted to the ‘true’ simulation parameter set. This paper therefore provides strong motivation for rigorous convergence testing of N-body codes to meet the future challenges of precision cosmology.

This paper asks an important question which I could paraphrase as “Do we trust N-body simulations too much?”. The use of numerical codes in cosmology is widespread and there’s no question that they have driven the subject forward in many ways, not least because they can generate “mock” galaxy catalogues in order to help plan survey strategies. However, I’ve always worried that there is a tendency to trust these calculations too much. On the one hand there’s the question of small-scale resolution and on the other there’s the finite size of the computational volume. And there are other complications in between too. In other words, simulations are approximate. To some extent our ability to extract information from surveys will therefore be limited by the inaccuracy of our calculation of the theoretical predictions.
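To see the mechanism at work, here is a toy Fisher-matrix version of the argument in the abstract (all the numbers are invented for illustration; only the linear algebra is real): marginalizing over a nuisance “simulation” parameter that is degenerate with a cosmological one inflates the error bar on the latter.

```python
import numpy as np

# Toy 2x2 Fisher matrix for a cosmological parameter (say w0) and a
# nuisance "simulation" parameter s partially degenerate with it.
F = np.array([[4.0, 1.8],
              [1.8, 1.0]])

sigma_fixed = 1.0 / np.sqrt(F[0, 0])          # error on w0 with s known exactly
sigma_marg = np.sqrt(np.linalg.inv(F)[0, 0])  # error on w0 after marginalizing over s

print(f"sigma(w0), s fixed:        {sigma_fixed:.3f}")
print(f"sigma(w0), s marginalized: {sigma_marg:.3f}")
print(f"degradation factor:        {sigma_marg / sigma_fixed:.2f}")
```

The stronger the degeneracy (the larger the off-diagonal entry relative to the diagonal ones), the worse the degradation – which is the sense in which imperfect knowledge of the “true” simulation parameters eats into the dark-energy figure of merit.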

Anyway, the paper gives us quite a few things to think about and I think it might provoke a bit of discussion, which is why I mentioned it here – i.e. to encourage folk to read it and give their opinions.

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:

1.

a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the intended meaning. This is worth bearing in mind if you prefer old-fashioned analytical theory!

In football, of course, you can even get sent off for simulation…

A Grand Design Challenge

Posted in Astrohype, The Universe and Stuff on July 20, 2012 by telescoper

While I’m incarcerated at home I thought I might as well make myself useful by passing on an interesting news item I found on the BBC website. This relates to a paper in the latest edition of Nature that reports the discovery of what appears to be a classic “Grand Design” spiral galaxy at a redshift of 2.18. According to the standard big bang cosmology this means that the light we are seeing set out from this object over 10 billion years ago, so the object formed about 3 billion years after the big bang.

I found this image of the object – known to its friends as BX442 – and was blown away by it…

…until I saw the dreaded words “artist’s rendering”. The actual image is somewhat less impressive.

But what’s really interesting about the study reported in Nature are the questions it asks about how this object fits into our understanding of spiral galaxy formation. According to the prevailing paradigm, galaxies form hierarchically, by progressively merging smaller clumps into bigger ones. The general expectation is that at high redshift – corresponding to earlier stages of the formation process – galaxies are rather clumpy and disturbed; the spiral structure we see in nearby galaxies is rather flimsy and easily disrupted, so it’s quite surprising to see this one. Does BX442 live in an especially quiet environment? Have we seen few high-redshift spirals because they are rare, or because they are hard to find? Answers to these and other questions will only be found by doing systematic surveys to establish the frequency and distribution of objects like this, as well as the details of their internal kinematics.

Quite Interesting.

Milky Way Satellites and Dark Matter

Posted in Astrohype, Bad Statistics, The Universe and Stuff on May 4, 2012 by telescoper

I found a strange paper on the arXiv last week, and was interested to see that it had been deemed to merit a press release from the Royal Astronomical Society that had been picked up by various sites across the interwebs.

The paper, to appear in due course in Monthly Notices of the Royal Astronomical Society, describes a study of the positions and velocities of small satellite galaxies and other objects around the Milky Way, which suggests the existence of a flattened structure orientated at right angles to the Galactic plane. They call this the “Vast Polar Structure”. There’s even a nifty video showing this arrangement:

They argue that this is evidence that these structures have a tidal origin, having been thrown out in the collision between two smaller galaxies during the formation of the Milky Way. One would naively expect a much more isotropic distribution of material around our Galaxy if matter had fallen into it in the relatively quiescent way envisaged by more standard theoretical models.

Definitely Quite Interesting.

However, I was rather taken aback by this quotation from one of the authors, Pavel Kroupa, which ends the press release.

Our model appears to rule out the presence of dark matter in the universe, threatening a central pillar of current cosmological theory. We see this as the beginning of a paradigm shift, one that will ultimately lead us to a new understanding of the universe we inhabit.

Hang on a minute!

One would infer from this rather bold statement that the paper concerned contained a systematic comparison between the observations – allowing for selection effects, such as incomplete sky coverage – and detailed theoretical calculations of what is predicted in the standard theory of galaxy formation involving dark matter.

But it doesn’t.

What it does contain is a simple statistical calculation of the probability that the observed distribution of satellite galaxies would have arisen from an exactly isotropic distribution function, which they conclude to be around 0.2 per cent.
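For concreteness, here is a toy Python version of that sort of significance calculation: generate many realizations of an exactly isotropic satellite distribution and ask how often they come out at least as flattened as the real one. The satellite count and the “observed” flattening below are placeholders I have invented, not the numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def flattening(directions):
    """Ratio of smallest to largest eigenvalue of the scatter matrix:
    small values indicate a flattened (plane-like) distribution."""
    T = directions.T @ directions / len(directions)
    eigenvalues = np.linalg.eigvalsh(T)  # ascending order
    return eigenvalues[0] / eigenvalues[-1]

n_sat, n_trials = 27, 20000  # placeholder satellite count and Monte Carlo size
observed = 0.18              # placeholder "observed" flattening statistic

stats = np.empty(n_trials)
for t in range(n_trials):
    v = rng.normal(size=(n_sat, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # isotropic unit vectors
    stats[t] = flattening(v)

p_value = np.mean(stats <= observed)  # fraction at least as flattened as "observed"
print(f"p-value under exact isotropy: {p_value:.4f}")
```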

However, we already know that galaxies like the Milky Way are not exactly isotropic, so this isn’t really a test of the dark matter hypothesis. It’s a test of an idealised unrealistic model. And even if it were a more general test of the dark matter hypothesis, the probability of this hypothesis being correct is not what has been calculated. The probability of a model given the data is not the same as the probability of the data given the model. To get that you need Bayes’ theorem.
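For completeness, the theorem in question says that the posterior probability of a model M given data D is

P(M|D) = \frac{P(D|M)\,P(M)}{P(D)},

so a small P(D|M) – like the 0.2 per cent above – does not on its own tell you P(M|D); that depends on the prior and on how well the competing models account for the same data.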

What needs to be done is to calculate the degree of anisotropy expected in the dark matter theory and in the tidal theory and then do a proper (i.e. Bayesian) comparison with the observations to see which model gives the better account of the data. This is not an easy thing to do, because it necessitates doing detailed dynamical calculations at very high resolution of what a galaxy like the Milky Way should look like according to both theories.

Until that’s done, these observations by no means “rule out” the dark matter theory.