Archive for galaxy formation

Merging Galaxies in the Early Universe

Posted in The Universe and Stuff with tags , , , , on November 14, 2017 by telescoper

I just saw this little movie circulated by the European Space Agency.

The source displayed in the video was first identified by the European Space Agency’s now-defunct Herschel Space Observatory, and later imaged at much higher resolution using the ground-based Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. It’s a significant discovery because it shows two large galaxies at quite high redshift (z=5.655) undergoing a major merger. According to the standard cosmological model this event occurred about a billion years after the Big Bang. The first galaxies are thought to have formed after a few hundred million years, but those objects are expected to have been much smaller than present-day galaxies like the Milky Way. Major mergers of the type apparently seen here are needed if structures are to grow sufficiently rapidly, through hierarchical clustering, to produce what we see around us now, about 13.7 Gyr after the Big Bang.
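If you want to check that timing for yourself, here is a minimal sketch using the astropy package (assuming you have it installed); the exact figure depends slightly on which cosmological parameters you adopt, so treat it as indicative rather than definitive.

# Quick check of the age of the Universe at z = 5.655 in a standard cosmology.
# Requires the astropy package; Planck15 is one of its built-in parameter sets.
from astropy.cosmology import Planck15

z = 5.655
age_then = Planck15.age(z).to('Gyr').value    # cosmic time elapsed at that redshift
age_now = Planck15.age(0).to('Gyr').value     # present-day age of the Universe
print(f"Age at z = {z}: {age_then:.2f} Gyr")  # roughly 1 Gyr
print(f"Age today:     {age_now:.2f} Gyr")    # roughly 13.8 Gyr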

The ESA press release can be found here and, for more expert readers, the refereed paper (by Riechers et al.) can be found here (if you have a subscription to the Astrophysical Journal) or for free on the arXiv here.

The abstract (which contains a lot of technical detail about the infra-red/millimetre/submillimetre observations involved in the study) reads:

We report the detection of ADFS-27, a dusty, starbursting major merger at a redshift of z=5.655, using the Atacama Large Millimeter/submillimeter Array (ALMA). ADFS-27 was selected from Herschel/SPIRE and APEX/LABOCA data as an extremely red “870 micron riser” (i.e., S_250<S_350<S_500<S_870), demonstrating the utility of this technique to identify some of the highest-redshift dusty galaxies. A scan of the 3mm atmospheric window with ALMA yields detections of CO(5-4) and CO(6-5) emission, and a tentative detection of H2O(211-202) emission, which provides an unambiguous redshift measurement. The strength of the CO lines implies a large molecular gas reservoir with a mass of M_gas=2.5×10^11(alpha_CO/0.8)(0.39/r_51) Msun, sufficient to maintain its ~2400 Msun/yr starburst for at least ~100 Myr. The 870 micron dust continuum emission is resolved into two components, 1.8 and 2.1 kpc in diameter, separated by 9.0 kpc, with comparable dust luminosities, suggesting an ongoing major merger. The infrared luminosity of L_IR~=2.4×10^13Lsun implies that this system represents a binary hyper-luminous infrared galaxy, the most distant of its kind presently known. This also implies star formation rate surface densities of Sigma_SFR=730 and 750Msun/yr/kpc2, consistent with a binary “maximum starburst”. The discovery of this rare system is consistent with a significantly higher space density than previously thought for the most luminous dusty starbursts within the first billion years of cosmic time, easing tensions regarding the space densities of z~6 quasars and massive quiescent galaxies at z>~3.

The word `riser’ refers to the fact that the measured flux density increases with wavelength across the range measured by Herschel/SPIRE (250 to 500 microns) and up to 870 microns. The follow-up observations with higher spectral resolution are based on identifications of carbon monoxide (CO) and water (H2O) in the spectra, which imply the existence of large quantities of gas capable of fuelling an extended period of star formation.
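To make the selection criterion concrete, here is a tiny sketch (with invented flux values, just for illustration) of the riser test, together with the gas depletion time implied by the gas mass and star-formation rate quoted in the abstract.

# The "870 micron riser" criterion: flux density rising monotonically with wavelength.
# The flux values below are invented for illustration, not the measured ones.
fluxes_mJy = {"S_250": 14.0, "S_350": 19.0, "S_500": 24.0, "S_870": 25.0}
is_riser = fluxes_mJy["S_250"] < fluxes_mJy["S_350"] < fluxes_mJy["S_500"] < fluxes_mJy["S_870"]
print("870 micron riser?", is_riser)

# Gas depletion time from the quoted gas mass and star formation rate:
M_gas_Msun = 2.5e11        # molecular gas mass quoted in the abstract
SFR_Msun_per_yr = 2400.0   # star formation rate quoted in the abstract
print(f"Depletion time ~ {M_gas_Msun / SFR_Msun_per_yr / 1e6:.0f} Myr")  # ~100 Myr, as stated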

Clearly a lot was going on in this system, a long time ago and a long way away!



Dark Matter Day

Posted in History, The Universe and Stuff with tags , , , , , on October 31, 2017 by telescoper

As a welcome alternative to the tedium of Hallowe’en (which I usually post about in this fashion), I notice that today (31st October 2017) has been officially designated Dark Matter Day. I would have sent some appropriate greetings cards but I couldn’t find any in the shops…

All of which gives me the excuse to post this nice video which shows (among other things) how dark matter plays a role in the formation of galaxies:

P.S. Lest we forget, today is also the 500th anniversary of the day that Martin Luther knocked on the door of All Saints’ Church in Wittenberg and said `Trick or Theses?’ (Is this right? Ed.)

Galaxy Formation in the EAGLE Project

Posted in The Universe and Stuff with tags , , , on December 8, 2016 by telescoper

Yesterday I went to a nice Colloquium by Rob Crain of Liverpool John Moores University (which is in the Midlands). Here’s the abstract of his talk which was entitled
Cosmological hydrodynamical simulations of the galaxy population:

I will briefly recap the motivation for, and progress towards, numerical modelling of the formation and evolution of the galaxy population – from cosmological initial conditions at early epochs through to the present day. I will introduce the EAGLE project, a flagship program of such simulations conducted by the Virgo Consortium. These simulations represent a major development in the discipline, since they are the first to broadly reproduce the key properties of the evolving galaxy population, and do so using energetically-feasible feedback mechanisms. I shall present a broad range of results from analyses of the EAGLE simulation, concerning the evolution of galaxy masses, their luminosities and colours, and their atomic and molecular gas content, to convey some of the strengths and limitations of the current generation of numerical models.

I added the link to the EAGLE project so you can find more information. As one of the oldies in the audience I can’t help remembering the old days of the galaxy formation simulation game. When I started my PhD back in 1985 the state of the art was a gravity-only simulation of 32^3 particles in a box. Nowadays one can manage about 2000^3 particles at the same time as having a good go at dealing not only with gravity but also the complex hydrodynamical processes involved in assembling a galaxy of stars, gas, dust and dark matter from a set of primordial fluctuations present in the early Universe. In these modern simulations one does not just track the mass distribution but also various thermodynamic properties such as temperature, pressure, internal energy and entropy, which means that they require large supercomputers. This certainly isn’t a solved problem – different groups get results that differ by an order of magnitude in some key predictions – but the game has certainly moved on dramatically in the past thirty years or so.
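To give a flavour of what those 1980s-style calculations involved, here is a minimal, purely illustrative gravity-only N-body sketch in Python (toy units with G = 1, direct summation, Plummer softening). It bears about the same relation to a production code like EAGLE’s as a paper aeroplane does to a jumbo jet, and none of the names or numbers in it come from the EAGLE project itself.

# A minimal direct-summation, gravity-only N-body sketch (toy units with G = 1).
# Real cosmological codes use tree or particle-mesh methods, periodic boxes and
# comoving coordinates; this is just to show the bare bones of the calculation.
import numpy as np

def accelerations(pos, mass, soft=0.05):
    """Pairwise gravitational accelerations with Plummer softening."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i for every pair
    dist2 = np.sum(diff**2, axis=-1) + soft**2
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                           # no self-force
    return np.sum(mass[np.newaxis, :, np.newaxis] * diff * inv_d3[:, :, np.newaxis], axis=1)

def leapfrog(pos, vel, mass, dt=0.01, n_steps=200):
    """Kick-drift-kick leapfrog integration."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

rng = np.random.default_rng(42)
n = 512                                     # a far cry from 2000^3, let alone hydrodynamics
pos = rng.uniform(-1.0, 1.0, size=(n, 3))   # random initial positions in a box
vel = np.zeros((n, 3))                      # cold start
mass = np.full(n, 1.0 / n)                  # equal-mass particles
pos, vel = leapfrog(pos, vel, mass)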

Another thing that has certainly improved a lot is data visualization: here is a video of one of the EAGLE simulations, showing a region of the Universe about 25 MegaParsecs across. The gas is colour-coded for temperature. As the simulation evolves you can see the gas first condense into the filaments of the Cosmic Web, thereafter forming denser knots in which stars form and become galaxies, experiencing in some cases explosive events which expel the gas. It’s quite a messy business, which is why one has to do these things numerically rather than analytically, but it’s certainly fun to watch!

That Big Black Hole Story

Posted in The Universe and Stuff with tags , , , , , , , , on February 28, 2015 by telescoper

There’s been a lot of news coverage this week about a very big black hole, so I thought I’d post a little bit of background.  The paper describing the discovery of the object concerned appeared in Nature this week, but basically it’s a quasar at a redshift z=6.30. That’s not the record for such an object. Not long ago I posted an item about the discovery of a quasar at redshift 7.085, for example. But what’s interesting about this beastie is that it’s a very big beastie, with a central black hole estimated to have a mass of around 12 billion times the mass of the Sun, which is a factor of ten or more larger than other objects found at high redshift.

Anyway, I thought perhaps it might be useful to explain a little bit about what difficulties this observation might pose for the standard “Big Bang” cosmological model. Our general understanding of how galaxies form is that gravity gathers cold non-baryonic matter into clumps into which “ordinary” baryonic material subsequently falls, eventually forming a luminous galaxy surrounded by a “halo” of (invisible) dark matter. Quasars are galaxies in which enough baryonic matter has collected in the centre of the halo to build a supermassive black hole, which powers a short-lived phase of extremely high luminosity.

The key idea behind this picture is that the haloes form by hierarchical clustering: the first to form are small but  merge rapidly  into objects of increasing mass as time goes on. We have a fairly well-established theory of what happens with these haloes – called the Press-Schechter formalism – which allows us to calculate the number-density N(M,z) of objects of a given mass M as a function of redshift z. As an aside, it’s interesting to remark that the paper largely responsible for establishing the efficacy of this theory was written by George Efstathiou and Martin Rees in 1988, on the topic of high redshift quasars.
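Just to be explicit about what that calculation involves (modulo notational conventions, which differ a little from paper to paper), the Press-Schechter prediction for the comoving number density of haloes of mass M at redshift z can be written

N(M,z)\,{\rm d}M = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,\frac{\delta_{c}}{\sigma(M)D(z)}\,\left|\frac{{\rm d}\ln\sigma}{{\rm d}\ln M}\right|\,\exp\left[-\frac{\delta_{c}^{2}}{2\sigma^{2}(M)D^{2}(z)}\right]\,{\rm d}M

where \bar{\rho} is the mean comoving matter density, \sigma(M) is the rms linear density fluctuation smoothed on mass scale M, D(z) is the linear growth factor (normalised to unity at z=0) and \delta_{c}\simeq 1.686 is the critical linear overdensity for collapse. The exponential factor is what produces the dramatic redshift dependence at high masses seen in the plot below.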

Anyway, this is how the mass function of haloes is predicted to evolve in the standard cosmological model; the different lines show the distribution as a function of redshift for redshifts from 0 (red) to 9 (violet):

Note that the typical size of a halo increases with decreasing redshift, but it’s only at really high masses that you see a dramatic effect. The plot is logarithmic, so the number density of large-mass haloes falls off by several orders of magnitude over the range of redshifts shown. The mass of the black hole responsible for the recently-detected high-redshift quasar is estimated to be about 1.2 \times 10^{10} M_{\odot}. But how does that relate to the mass of the halo within which it resides? Clearly the dark matter halo has to be more massive than the baryonic material it collects, and therefore more massive than the central black hole, but by how much?

This question is very difficult to answer, as it depends on how luminous the quasar is, how long it lives, what fraction of the baryons in the halo fall into the centre, what efficiency is involved in generating the quasar luminosity, and so on. Efstathiou and Rees argued that to power a quasar with luminosity of order 10^{13} L_{\odot} for a time of order 10^{8} years requires a parent halo of mass about 2\times 10^{11} M_{\odot}. Generally, it’s a reasonable back-of-an-envelope estimate that the halo mass would be about a hundred times larger than that of the central black hole, so the halo housing this one could be around 10^{12} M_{\odot}.

You can see from the plot that the abundance of such haloes is down by quite a factor at redshift 7 compared to redshift 0 (the present epoch), but the fall-off is even more precipitous for haloes of larger mass than this. We really need to know how abundant such objects are before drawing definitive conclusions, and one object isn’t enough to put a reliable estimate on the general abundance, but with the discovery of this object it’s certainly getting interesting. Haloes the size of a galaxy cluster, i.e. 10^{14} M_{\odot}, are rarer by many orders of magnitude at redshift 7 than at redshift 0, so if anyone ever finds one at this redshift that would really be a shock to many a cosmologist’s system, as would be the discovery of quasars with such a high mass at redshifts significantly higher than seven.

Another thing worth mentioning is that, although there might be a sufficient number of potential haloes to serve as hosts for a quasar, there remains the difficult issue of understanding precisely how the black hole forms and especially how long it takes to do so. This aspect of the process of quasar formation is much more complicated than the halo distribution, so it’s probably on detailed models of  black-hole  growth that this discovery will have the greatest impact in the short term.
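To illustrate why the timescale is the awkward bit, here is a standard back-of-the-envelope estimate (a textbook argument, not something taken from the discovery paper) of how long Eddington-limited accretion would take to build such a black hole; the radiative efficiency and seed mass are assumptions plugged in purely for the sake of argument.

# Back-of-the-envelope growth time for a 1.2e10 Msun black hole accreting at the
# Eddington rate. The efficiency and seed mass are illustrative assumptions, not
# values taken from the discovery paper.
import numpy as np

eps = 0.1                            # assumed radiative efficiency
t_efold_yr = 4.5e8 * eps             # e-folding (Salpeter) time, roughly 45 Myr for eps = 0.1

M_seed = 1.0e2                       # assumed seed black hole mass in solar masses
M_final = 1.2e10                     # the quoted black hole mass in solar masses
n_efolds = np.log(M_final / M_seed)  # number of e-foldings of growth required
t_growth_Gyr = n_efolds * t_efold_yr / 1e9
print(f"{n_efolds:.1f} e-folds, i.e. ~{t_growth_Gyr:.2f} Gyr of continuous Eddington-limited accretion")

With these assumptions the answer comes out at around 0.8 Gyr of uninterrupted accretion, uncomfortably close to the age of the Universe at z=6.3, which gives a feel for why the black-hole growth history is where the pressure really lies.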

Illustris, Cosmology, and Simulation…

Posted in The Universe and Stuff with tags , , , , , , on May 8, 2014 by telescoper

There’s been quite a lot of news coverage over the last day or two emanating from a paper just out in the journal Nature by Vogelsberger et al. which describes a set of cosmological simulations called Illustris; see for example here and here.

The excitement revolves around the fact that Illustris represents a bit of a landmark, in that it’s the first hydrodynamical simulation with sufficient dynamical range that it is able to fully resolve the formation and evolution of  individual galaxies within the cosmic web of large-scale structure.

The simulations obviously represent a tremendous piece of work; they were run on supercomputers in France, Germany, and the USA; the largest of them was run on no less than 8,192 computer cores and took 19 million CPU hours. A single state-of-the-art desktop computer would require more than 2000 years to perform this calculation!

There’s even a video to accompany it (shame about the music):

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:


a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the meaning intended to be conveyed by the usage in the context of cosmological simulations. This is worth bearing in mind if you prefer old-fashioned analytical theory and want to wind up a simulationist! In football, of course, you can even get sent off for simulation…

Reproducing a reasonable likeness of something in a computer is not the same as understanding it, but that is not to say that these simulations aren’t incredibly useful and powerful, not just for making lovely pictures and videos but for helping to plan large scale survey programmes that can go and map cosmological structures on the same scale. Simulations of this scale are needed to help design observational and data analysis strategies for, e.g., the  forthcoming Euclid mission.

Cosmic Swirly Straws Feed Galaxy

Posted in The Universe and Stuff with tags , , , , , on June 5, 2013 by telescoper

I came across this video on youtube and was intrigued because the title seemed like a crossword clue (to which I couldn’t figure out the answer). It turns out that it goes with a piece in the Guardian which describes a computer simulation showing the formation of a galaxy during the first 2bn years of the Universe’s evolution. Those of us interested in cosmic structures on a larger scale than galaxies usually show such simulations in co-moving coordinates (i.e. in a box that expands at the same rate as the Universe), but this one is in physical coordinates showing the actual size of the objects therein; the galaxy is seen first to condense out of the expanding distribution of matter, but then grows by accreting matter in a complicated and rather beautiful way.
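For anyone unfamiliar with the distinction, here is a trivial sketch (with an arbitrary example size, not one taken from the simulation) of the relation between the two coordinate systems: a fixed comoving length corresponds to a physical length smaller by the scale factor a = 1/(1+z).

# Comoving versus physical (proper) size: a fixed comoving length corresponds to
# a physical length scaled by a = 1/(1+z). The 10 Mpc figure is just an example.
for z in [0, 1, 3, 6, 10]:
    a = 1.0 / (1.0 + z)
    comoving_Mpc = 10.0
    physical_Mpc = comoving_Mpc * a
    print(f"z = {z:2d}: 10 comoving Mpc = {physical_Mpc:5.2f} physical Mpc")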

This calculation includes gravitational and hydrodynamical effects, allowing it to trace the separate behaviour of dark matter and gas (predominantly hydrogen). You can see that this particular object forms very early on; the current age of the Universe is estimated to be about 13–14 billion years. When we look far into space using very big telescopes we see objects from which light has taken billions of years to reach us. We can therefore actually see galaxies as they were forming, and can test observationally whether they form as theory (and simulation) suggests.

Simulations and False Assumptions

Posted in The Universe and Stuff with tags , , , , on November 29, 2012 by telescoper

Just time for an afternoon quickie!

I saw this abstract by Smith et al. on the arXiv today:

Future large-scale structure surveys of the Universe will aim to constrain the cosmological model and the true nature of dark energy with unprecedented accuracy. In order for these surveys to achieve their designed goals, they will require predictions for the nonlinear matter power spectrum to sub-percent accuracy. Through the use of a large ensemble of cosmological N-body simulations, we demonstrate that if we do not understand the uncertainties associated with simulating structure formation, i.e. knowledge of the `true’ simulation parameters, and simply seek to marginalize over them, then the constraining power of such future surveys can be significantly reduced. However, for the parameters {n_s, h, Om_b, Om_m}, this effect can be largely mitigated by adding the information from a CMB experiment, like Planck. In contrast, for the amplitude of fluctuations sigma8 and the time-evolving equation of state of dark energy {w_0, w_a}, the mitigation is mild. On marginalizing over the simulation parameters, we find that the dark-energy figure of merit can be degraded by ~2. This is likely an optimistic assessment, since we do not take into account other important simulation parameters. A caveat is our assumption that the Hessian of the likelihood function does not vary significantly when moving from our adopted to the ‘true’ simulation parameter set. This paper therefore provides strong motivation for rigorous convergence testing of N-body codes to meet the future challenges of precision cosmology.

This paper asks an important question which I could paraphrase as “Do we trust N-body simulations too much?”.  The use of numerical codes in cosmology is widespread and there’s no question that they have driven the subject forward in many ways, not least because they can generate “mock” galaxy catalogues in order to help plan survey strategies. However, I’ve always worried that there is a tendency to trust these calculations too much. On the one hand there’s the question of small-scale resolution and on the other there’s the finite size of the computational volume. And there are other complications in between too. In other words, simulations are approximate. To some extent our ability to extract information from surveys will therefore be limited by the inaccuracy of our calculation of  the theoretical predictions.
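To illustrate the general point about marginalizing over uncertain simulation parameters, here is a deliberately over-simplified toy Fisher-matrix calculation; the numbers in the matrix are invented purely for illustration and have nothing to do with the actual forecasts in the paper.

# Toy Fisher-matrix illustration of how marginalizing over an uncertain nuisance
# parameter (standing in for "simulation parameters") degrades the dark-energy
# figure of merit. The matrix entries are invented, not taken from the paper.
import numpy as np

# Fisher matrix for (w0, wa, nuisance); off-diagonal terms encode degeneracies.
F = np.array([[ 40.0, -12.0,  18.0],
              [-12.0,   6.0,  -7.0],
              [ 18.0,  -7.0,  10.0]])

def fom_w0_wa(fisher, fix_nuisance):
    """Dark-energy figure of merit ~ 1/sqrt(det Cov(w0, wa))."""
    if fix_nuisance:
        cov = np.linalg.inv(fisher[:2, :2])     # fixing the nuisance: drop its row and column
    else:
        cov = np.linalg.inv(fisher)[:2, :2]     # marginalizing: invert the full matrix first
    return 1.0 / np.sqrt(np.linalg.det(cov))

print("FoM with nuisance fixed:       ", fom_w0_wa(F, fix_nuisance=True))
print("FoM with nuisance marginalized:", fom_w0_wa(F, fix_nuisance=False))

The marginalized figure of merit is substantially lower than the one obtained by pretending the nuisance parameter is known perfectly, which is the qualitative effect the paper quantifies for realistic surveys.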

Anyway,  the paper gives us quite a few things to think about and I think it might provoke a bit of discussion, which is why I mentioned it here – i.e. to encourage folk to read and give their opinions.

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:


a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the intended meaning. This is worth bearing in mind if you prefer old-fashioned analytical theory!

In football, of course, you can even get sent off for simulation…