Archive for galaxy formation

How big were the biggest galaxies in the early Universe?

Posted in Biographical, Cardiff, The Universe and Stuff on August 23, 2022 by telescoper

Once upon a time (over a decade ago when I was still in Cardiff), I wrote a paper with PhD student Ian Harrison on the biggest (most massive) galaxy clusters. I even wrote a blog post about it. It was based on an interesting branch of statistical theory called extreme value statistics which I posted about in general terms here.

Well, now the recent spate of observations of high-redshift galaxies by the James Webb Space Telescope has inspired Chris Lovell (who was a student at Cardiff back in the day, then moved to Sussex to do his PhD, and is now at the University of Hertfordshire) and Ian Harrison (who is back in Cardiff as a postdoc after a spell in the Midlands), together with others at Cambridge and Sussex, to apply the extreme value statistics idea not to clusters but to galaxies. Here is the abstract:

The basic idea of galaxy formation in the standard ΛCDM cosmological model is that galaxies form in dark matter haloes that grow hierarchically so that the typical size of galaxies increases with time. The most massive haloes at high redshift should therefore be less massive than the most massive haloes at low redshift, as neatly illustrated by this figure, which shows the theoretical halo mass function (solid lines) and the predicted distribution of the most massive halo (dashed lines) at a number of redshifts, for a fixed volume of 100 Mpc³.

The colour-coding indicates redshift, as per the legend, with light blue the highest (z=16).
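For anyone curious how the extreme value argument actually works, here is a minimal Python sketch of the standard Poisson-counting version of it: if n(>M) is the comoving number density of haloes more massive than M, the probability that the most massive halo in a volume V lies below M is exp(−V n(>M)), and differentiating that gives the distribution of the maximum. The Schechter-like mass function and every parameter value below are invented placeholders for illustration, not the fits used by Lovell et al.

```python
import numpy as np

# Toy cumulative halo mass function n(>M), in haloes per Mpc^3.
# Schechter-like shape with made-up parameters: purely illustrative,
# NOT the mass function used in the paper.
def n_above(M, n0=1e-2, Mstar=1e13, alpha=0.9):
    x = M / Mstar
    return n0 * x**(-alpha) * np.exp(-x)

def evs_of_maximum(M, volume):
    """CDF and PDF of the most massive halo in `volume` (Mpc^3), assuming
    halo counts are Poisson so that P(M_max < M) = exp(-V * n(>M))."""
    cdf = np.exp(-volume * n_above(M))
    pdf = np.gradient(cdf, M)
    return cdf, pdf

M = np.logspace(12, 16, 4000)                 # halo masses in solar masses
cdf, pdf = evs_of_maximum(M, volume=1.0e6)    # an illustrative comoving volume
print(f"Most probable M_max ~ {M[np.argmax(pdf)]:.2e} Msun")
```

The dashed curves in the figure above are essentially this calculation done properly, with the real mass function, redshift by redshift.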

Of course we don’t observe the halo mass directly and the connection between this mass and the luminosity of a galaxy sitting in it is likely to be complicated because the formation of the stars that produce the light is a rather messy process; the ratio of mass to light is consequently hard to predict. Moreover we don’t even have overwhelmingly convincing measurements of the redshifts yet. A brief summary of the conclusions of this paper, however, is that some of the big early galaxies recently observed by JWST seem to be a bit too big for comfort if we take their observed properties at face value. A lot more observational work will be needed, however, before we can draw definite conclusions about whether the standard model is consistent with these new observations.

Sins of Omission

Posted in The Universe and Stuff on February 20, 2022 by telescoper

There’s a paper recently published in Nature Astronomy by Moreno et al., which you can find on the arXiv here. The title is Galaxies lacking dark matter produced by close encounters in a cosmological simulation and the abstract is here:

The standard cold dark matter plus cosmological constant model predicts that galaxies form within dark-matter haloes, and that low-mass galaxies are more dark-matter dominated than massive ones. The unexpected discovery of two low-mass galaxies lacking dark matter immediately provoked concerns about the standard cosmology and ignited explorations of alternatives, including self-interacting dark matter and modified gravity. Apprehension grew after several cosmological simulations using the conventional model failed to form adequate numerical analogues with comparable internal characteristics (stellar masses, sizes, velocity dispersions and morphologies). Here we show that the standard paradigm naturally produces galaxies lacking dark matter with internal characteristics in agreement with observations. Using a state-of-the-art cosmological simulation and a meticulous galaxy-identification technique, we find that extreme close encounters with massive neighbours can be responsible for this. We predict that approximately 30 percent of massive central galaxies (with at least 10¹¹ solar masses in stars) harbour at least one dark-matter-deficient satellite (with 10⁸–10⁹ solar masses in stars). This distinctive class of galaxies provides an additional layer in our understanding of the role of interactions in shaping galactic properties. Future observations surveying galaxies in the aforementioned regime will provide a crucial test of this scenario.

It’s quite an interesting result.

I’m reminded of this very well known paper from way back in 1998, available on arXiv here, by Priya Natarajan, Steinn Sigurdsson and Joe Silk, with the abstract:

We propose a scenario for the formation of a population of baryon-rich, dark matter-deficient dwarf galaxies at high redshift that form from the mass swept out in the Intergalactic Medium (IGM) by energetic outflows from luminous quasars. We predict the intrinsic properties of these galaxies, and examine the prospects for their observational detection in the optical, X-ray and radio wavebands. Detectable thermal Sunyaev-Zeldovich decrements (cold spots) on arc-minute scales in the cosmic microwave background radiation maps are expected during the shock-heated expanding phase from these hot bubbles. We conclude that the optimal detection strategy for these dwarfs is via narrow-band Lyman-α imaging of regions around high redshift quasars. An energetically scaled-down version of the same model is speculated upon as a possible mechanism for forming pre-galactic globular clusters.

It’s true that the detailed mechanism for forming dwarf galaxies with low dark matter densities is different in the two papers, but it does show that the issue being addressed by Moreno et al. had been addressed before. It seems to me therefore that the Natarajan et al. paper is clearly relevant background to the Moreno et al. one. I always tell junior colleagues to cite all relevant literature. I wonder why Moreno et al. decided not to do that with this paper?

Had Moreno et al. preprinted their paper before acceptance by Nature Astronomy I’m sure someone would have told them of this omission. This is yet another reason for submitting your papers to arXiv at the same time as you submit them to a journal rather than waiting for them to be published.

New Publication at the Open Journal of Astrophysics

Posted in OJAp Papers, Open Access, The Universe and Stuff on August 13, 2021 by telescoper

Back from my short trip, I now have time to announce another publication in the Open Journal of Astrophysics. This one was published at the end of last month but, owing to the holiday season, there was a delay in activating the DOI and registering the metadata, so I have delayed posting about it until just now. It is the seventh paper in Volume 4 (2021) and the 38th in all.

The latest publication is entitled A Differentiable Model of the Assembly of Individual and Populations of Dark Matter Halos. The authors are Andrew P. Hearin,  Jonás Chaves-Montero, Matthew R. Becker and Alex Alarcon, all of the Argonne National Laboratory.

Here is a screen grab of the overlay which includes the abstract:

You can click on the image to make it larger should you wish to do so. You can find the arXiv version of the paper here. This one is also in the folder marked Cosmology and Nongalactic Astrophysics.

We’ve had a bit of a surge in submissions over the last few weeks – no doubt due to authors using their “vacation” to finish off papers. August is not the best month for finding referees, but we’ll do our best to process them quickly!

Catching up on Cosmic Dawn

Posted in The Universe and Stuff on June 25, 2021 by telescoper

Trying to catch up on cosmological news after a busy week I came across a number of pieces in the media about “Cosmic Dawn” (e.g. here in The Grauniad). I’ve never actually met Cosmic Dawn but she seems like an interesting lady.

But seriously folks, Cosmic Dawn refers to the epoch during which the first stars formed in the expanding Universe, lighting it up after a few hundred million years of post-recombination darkness.

According to the Guardian article mentioned above, the new results being discussed are published in Monthly Notices of the Royal Astronomical Society, but they’re actually not. Yet. Nevertheless the paper (by Laporte et al.) is available on the arXiv, which is where people will actually read it…

Anyway, here is the abstract:

Here is a composite of HST and ALMA images for one of the objects discussed in the paper (MACS0416-JD):

I know it looks a bit blobby but it’s not easy to resolve things at such huge distances! Also, it’s quite small because it’s far away. In any case the spectroscopy is really the important thing, not the images, as that is what determines the redshift. The Universe has expanded by a factor 10 since light set out towards us from an object at redshift 9. I’m old enough to remember when “high redshift” meant z~0.1!
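For the record, that factor of 10 is just the definition of redshift in terms of the cosmic scale factor a(t):

```latex
1 + z = \frac{a(t_{\rm obs})}{a(t_{\rm em})}
\qquad\Longrightarrow\qquad
\frac{a_{\rm now}}{a_{\rm then}} = 1 + 9 = 10 .
```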

At the end of my talk on Wednesday Floyd Stecker asked me about what the James Webb Space Telescope (due for launch later this year) would do for cosmology and I replied that it would probably do a lot more for galaxy formation and evolution than cosmology per se. I think this is a good illustration of what I meant. Because of its infrared capability JWST will allow astronomers to push back even further and learn even more about how the first stars formed, but it won’t tell us much directly about dark matter and dark energy.

Cosmology Talks: Volker Springel on GADGET-4

Posted in The Universe and Stuff on May 18, 2021 by telescoper

It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but those seriously interested in cosmology at a research level should find them well worth watching.

In this talk from a couple of months ago, Volker Springel discusses GADGET-4, a parallel computational code that combines a cosmological N-body solver with smoothed-particle hydrodynamics (SPH) and is intended for simulations of cosmic structure formation and for calculations relevant to galaxy evolution and galactic dynamics.

Its predecessor, GADGET-2, is probably the most widely used computational code in cosmology; this talk discusses what new ideas are implemented in GADGET-4 to improve on the earlier version and what new features it has. Volker also explains what happened to GADGET-3!

The paper describing Gadget-4 can be found here.
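For readers who have never looked inside a code of this kind, here is a deliberately tiny Python cartoon of the kick-drift-kick (leapfrog) time integration that gravity-plus-SPH particle codes are built around. It does direct-summation softened gravity only, in arbitrary units, with no SPH, no tree or mesh and no cosmological expansion, so it is in no sense GADGET itself, just a sketch of the basic idea.

```python
import numpy as np

def accelerations(pos, mass, soft=0.05):
    """Direct-summation softened gravitational accelerations (G = 1)."""
    dx = pos[None, :, :] - pos[:, None, :]      # pairwise separation vectors
    r2 = (dx**2).sum(axis=-1) + soft**2         # softened squared distances
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)               # no self-force
    return (dx * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

def leapfrog(pos, vel, mass, dt=0.01, nsteps=1000):
    """Kick-drift-kick integration, the same basic scheme particle codes use."""
    acc = accelerations(pos, mass)
    for _ in range(nsteps):
        vel += 0.5 * dt * acc                   # half kick
        pos += dt * vel                         # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                   # half kick
    return pos, vel

rng = np.random.default_rng(1)
pos = rng.normal(size=(64, 3))                  # 64 particles, arbitrary units
vel = np.zeros((64, 3))
mass = np.full(64, 1.0 / 64)
pos, vel = leapfrog(pos, vel, mass)
```

Real codes replace the O(N²) force loop with tree or particle-mesh methods, add the SPH equations for the gas, and integrate in comoving coordinates, which is where almost all of the complexity (and the supercomputer time) goes.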


New Publication at the Open Journal of Astrophysics!

Posted in Maynooth, OJAp Papers, Open Access, The Universe and Stuff on August 24, 2020 by telescoper

So another new paper has been published in the Open Journal of Astrophysics! This one is in the folder marked Astrophysics of Galaxies and is entitled Massive Star Formation in Metal-Enriched Haloes at High Redshift. I should explain that “Metal” here is the astrophysicist’s definition which basically means anything heavier than hydrogen or helium: chemists may look away now.

The authors of this paper are John Regan (of the Department of Theoretical Physics at Maynooth University), Zoltán Haiman (Columbia), John Wise (Georgia Tech), Brian O’Shea (Michigan State) and Michael Norman (UCSD). And before anyone asks, no I don’t force members of staff in my Department to submit papers to the Open Journal of Astrophysics and yes I did stand aside from the Editorial process because of the institutional conflict.

Here is a screen grab of the overlay:

You can click on the image to make it larger should you wish to do so.

You can find the arXiv version of the paper here.

Chaos and Variance in (Simulations of) Galaxy Formation

Posted in The Universe and Stuff on September 11, 2019 by telescoper

During yesterday’s viva voce examination a paper came up that I missed when it came out last year. It’s by Keller et al. and is called Chaos and Variance in Galaxy Formation. The abstract reads:

The evolution of galaxies is governed by equations with chaotic solutions: gravity and compressible hydrodynamics. While this micro-scale chaos and stochasticity has been well studied, it is poorly understood how it couples to macro-scale properties examined in simulations of galaxy formation. In this paper, we show how perturbations introduced by floating-point roundoff, random number generators, and seemingly trivial differences in algorithmic behaviour can produce non-trivial differences in star formation histories, circumgalactic medium (CGM) properties, and the distribution of stellar mass. We examine the importance of stochasticity due to discreteness noise, variations in merger timings and how self-regulation moderates the effects of this stochasticity. We show that chaotic variations in stellar mass can grow until halted by feedback-driven self-regulation or gas exhaustion. We also find that galaxy mergers are critical points from which large (as much as a factor of 2) variations in quantities such as the galaxy stellar mass can grow. These variations can grow and persist for more than a Gyr before regressing towards the mean. These results show that detailed comparisons of simulations require serious consideration of the magnitude of effects compared to run-to-run chaotic variation, and may significantly complicate interpreting the impact of different physical models. Understanding the results of simulations requires us to understand that the process of simulation is not a mapping of an infinitesimal point in configuration space to another, final infinitesimal point. Instead, simulations map a point in a space of possible initial conditions to a volume of possible final states.

(The highlighting is mine.) I find this analysis pretty scary, actually, as it shows that numerical effects (including just running the code on different processors) can have an enormous impact on the outputs of these simulations. Here’s Figure 14 for example:

This shows the predicted stellar surface mass density in a number of simulations: the outputs vary by more than an order of magnitude!
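If you want a feel for how a roundoff-level difference can balloon like this, here is a toy Python demonstration using the Lorenz equations rather than anything as expensive as a galaxy simulation. The equations and numbers are just a stand-in, but the behaviour, exponential growth of a 10⁻¹⁵ perturbation until the two "runs" have completely decorrelated, is the kind of thing Keller et al. are quantifying for full simulations.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One (crude) Euler step of the Lorenz system, a standard chaotic toy model."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-15, 0.0, 0.0])   # perturbation at the level of double-precision roundoff

for step in range(1, 8001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        print(f"step {step}: separation = {np.linalg.norm(a - b):.3e}")

# The separation grows from ~1e-15 until it saturates at the size of the
# attractor: the two 'runs' have forgotten they started almost identically.
```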

This paper underlines an important question which I have worried about before, and could paraphrase as “Do we trust N-body simulations too much?”. The use of numerical codes in cosmology is widespread and there’s no question that they have driven the subject forward in many ways, not least because they can generate “mock” galaxy catalogues in order to help plan survey strategies. However, I’ve always been concerned that there is a tendency to trust these calculations too much. On the one hand there’s the question of small-scale resolution and on the other there’s the finite size of the computational volume. And there are other complications in between too. In other words, simulations are approximate. To some extent our ability to extract information from surveys will therefore be limited by the inaccuracy of our calculation of the theoretical predictions.

Anyway, the paper gives us quite a few things to think about and I think it might provoke a bit of discussion, which is why I mentioned it here – i.e. to encourage folk to read and give their opinions.

The use of the word “simulation” always makes me smile. Being a crossword nut I spend far too much time looking in dictionaries but one often finds quite amusing things there. This is how the Oxford English Dictionary defines SIMULATION:

1.

a. The action or practice of simulating, with intent to deceive; false pretence, deceitful profession.

b. Tendency to assume a form resembling that of something else; unconscious imitation.

2. A false assumption or display, a surface resemblance or imitation, of something.

3. The technique of imitating the behaviour of some situation or process (whether economic, military, mechanical, etc.) by means of a suitably analogous situation or apparatus, esp. for the purpose of study or personnel training.

So it’s only the third entry that gives the intended meaning. This is worth bearing in mind if you prefer old-fashioned analytical theory!

In football, of course, you can even get sent off for simulation…

Merging Galaxies in the Early Universe

Posted in The Universe and Stuff on November 14, 2017 by telescoper

I just saw this little movie circulated by the European Space Agency.

The source displayed in the video was first identified by the European Space Agency’s now-defunct Herschel Space Observatory, and later imaged with much higher resolution using the ground-based Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. It’s a significant discovery because it shows two large galaxies at quite high redshift (z=5.655) undergoing a major merger. According to the standard cosmological model this event occurred about a billion years after the Big Bang. The first galaxies are thought to have formed after a few hundred million years, but these objects are expected to have been much smaller than present-day galaxies like the Milky Way. Major mergers of the type apparently seen here are needed if structures are to grow sufficiently rapidly, through hierarchical clustering, to produce what we see around us now, about 13.7 Gyrs after the Big Bang.
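If you want to check the "about a billion years" figure yourself, and you have astropy installed, its built-in Planck 2018 cosmology will do the arithmetic; the exact number depends mildly on which cosmological parameters you adopt.

```python
from astropy.cosmology import Planck18

z = 5.655
print(Planck18.age(z))   # ~1.0 Gyr: the age of the Universe at this redshift
print(Planck18.age(0))   # ~13.8 Gyr: its age now
```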

The ESA press release can be found here and for more expert readers the refereed paper (by Riechers et al.) can be found here (if you have a subscription to the Astrophysical Journal) or for free on the arXiv here.

The abstract (which contains a lot of technical detail about the infra-red/millimetre/submillimetre observations involved in the study) reads:

We report the detection of ADFS-27, a dusty, starbursting major merger at a redshift of z=5.655, using the Atacama Large Millimeter/submillimeter Array (ALMA). ADFS-27 was selected from Herschel/SPIRE and APEX/LABOCA data as an extremely red “870 micron riser” (i.e., S_250<S_350<S_500<S_870), demonstrating the utility of this technique to identify some of the highest-redshift dusty galaxies. A scan of the 3mm atmospheric window with ALMA yields detections of CO(5-4) and CO(6-5) emission, and a tentative detection of H2O(2_11-2_02) emission, which provides an unambiguous redshift measurement. The strength of the CO lines implies a large molecular gas reservoir with a mass of M_gas=2.5×10^11(alpha_CO/0.8)(0.39/r_51) Msun, sufficient to maintain its ~2400 Msun/yr starburst for at least ~100 Myr. The 870 micron dust continuum emission is resolved into two components, 1.8 and 2.1 kpc in diameter, separated by 9.0 kpc, with comparable dust luminosities, suggesting an ongoing major merger. The infrared luminosity of L_IR~=2.4×10^13 Lsun implies that this system represents a binary hyper-luminous infrared galaxy, the most distant of its kind presently known. This also implies star formation rate surface densities of Sigma_SFR=730 and 750 Msun/yr/kpc^2, consistent with a binary “maximum starburst”. The discovery of this rare system is consistent with a significantly higher space density than previously thought for the most luminous dusty starbursts within the first billion years of cosmic time, easing tensions regarding the space densities of z~6 quasars and massive quiescent galaxies at z>~3.

The word ‘riser’ refers to the fact that the measured flux increases with wavelength, from the range of wavelengths measured by Herschel/SPIRE (250 to 500 microns) up to 870 microns. The follow-up observations with higher spectral resolution are based on identifications of carbon monoxide (CO) and water (H2O) lines in the spectra, which imply the existence of large quantities of gas capable of fuelling an extended period of star formation.
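As a concrete, though entirely schematic, illustration of that selection: a riser is simply a source whose measured flux keeps increasing from the SPIRE bands out to 870 microns. The field names and flux values below are invented for the example.

```python
# Toy catalogue: fluxes in mJy at 250, 350, 500 and 870 microns (made-up values).
candidates = [
    {"name": "src1", "S250": 20.0, "S350": 25.0, "S500": 31.0, "S870": 40.0},
    {"name": "src2", "S250": 45.0, "S350": 38.0, "S500": 22.0, "S870": 10.0},
]

# The "870 micron riser" criterion: S_250 < S_350 < S_500 < S_870.
risers = [s for s in candidates
          if s["S250"] < s["S350"] < s["S500"] < s["S870"]]
print([s["name"] for s in risers])   # -> ['src1']
```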

Clearly a lot was going on in this system, a long time ago and a long way away!


Dark Matter Day

Posted in History, The Universe and Stuff on October 31, 2017 by telescoper

As a welcome alternative to the tedium of Hallowe’en (which I usually post about in this fashion), I notice that today (31st October 2017) has been officially designated Dark Matter Day. I would have sent some appropriate greetings cards but I couldn’t find any in the shops…

All of which gives me the excuse to post this nice video which shows (among other things) how dark matter plays a role in the formation of galaxies:

P.S. Lest we forget, today is also the 500th anniversary of the day that Martin Luther knocked on the door of All Saints’ Church in Wittenberg and said `Trick or Theses?’ (Is this right? Ed.)

Galaxy Formation in the EAGLE Project

Posted in The Universe and Stuff on December 8, 2016 by telescoper

Yesterday I went to a nice Colloquium by Rob Crain of Liverpool John Moores University (which is in the Midlands). Here’s the abstract of his talk which was entitled
Cosmological hydrodynamical simulations of the galaxy population:

I will briefly recap the motivation for, and progress towards, numerical modelling of the formation and evolution of the galaxy population – from cosmological initial conditions at early epochs through to the present day. I will introduce the EAGLE project, a flagship program of such simulations conducted by the Virgo Consortium. These simulations represent a major development in the discipline, since they are the first to broadly reproduce the key properties of the evolving galaxy population, and do so using energetically-feasible feedback mechanisms. I shall present a broad range of results from analyses of the EAGLE simulation, concerning the evolution of galaxy masses, their luminosities and colours, and their atomic and molecular gas content, to convey some of the strengths and limitations of the current generation of numerical models.

I added the link to the EAGLE project so you can find more information. As one of the oldies in the audience I can’t help remembering the old days of the galaxy formation simulation game. When I started my PhD back in 1985 the state of the art was a gravity-only simulation of 32³ particles in a box. Nowadays one can manage about 2000³ particles at the same time as having a good go at dealing not only with gravity but also the complex hydrodynamical processes involved in assembling a galaxy of stars, gas, dust and dark matter from a set of primordial fluctuations present in the early Universe. In these modern simulations one does not just track the mass distribution but also various thermodynamic properties such as temperature, pressure, internal energy and entropy, which means that they require large supercomputers. This certainly isn’t a solved problem – different groups get results that differ by an order of magnitude in some key predictions – but the game has certainly moved on dramatically in the past thirty years or so.
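Just to put a number on that growth in brute force (a back-of-the-envelope comparison only, ignoring the fact that the modern runs are also doing hydrodynamics):

```python
# Particle counts quoted above: a 32^3 run in 1985 versus ~2000^3 today.
n_1985, n_now = 32**3, 2000**3
print(f"{n_1985:,} particles then, {n_now:,} now: a factor of {n_now / n_1985:,.0f} more")
# -> 32,768 particles then, 8,000,000,000 now: a factor of 244,141 more
```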

Another thing that has certainly improved a lot is data visualization: here is a video of one of the EAGLE simulations, showing a region of the Universe about 25 MegaParsecs across. The gas is colour-coded for temperature. As the simulation evolves you can see the gas first condense into the filaments of the Cosmic Web, thereafter forming denser knots in which stars form and become galaxies, experiencing in some cases explosive events which expel the gas. It’s quite a messy business, which is why one has to do these things numerically rather than analytically, but it’s certainly fun to watch!