It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting.

In this talk from a couple of months ago, Volker Springel discusses GADGET-4, a parallel computational code that combines cosmological N-body and SPH solvers and is intended for simulations of cosmic structure formation and for calculations relevant to galaxy evolution and galactic dynamics.

The predecessor of GADGET-4, GADGET-2, is probably the most widely used computational code in cosmology; this talk discusses the new ideas implemented in GADGET-4 to improve on the earlier version and the new features it offers. Volker also explains what happened to GADGET-3!

I have from time to time posted videos from the series of Cosmology Talks curated by Shaun Hotchkiss. These are usually technical talks at the level you might expect for a cosmology seminar, but this time it’s something different. Shaun asked me if I’d like to give a talk about the Open Journal of Astrophysics, so one night last week we recorded this. We ended up chatting about quite a lot of things so it turned out longer than most of the videos in the series, but it’s not a technical talk so I hope you’ll find it bearable!

It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting. This one was published just yesterday.

In the talk Dan Thomas discusses his recent work, first creating a framework for describing modified gravity (i.e. extensions of general relativity) in a model-independent way on non-linear scales, and then running N-body simulations in that framework. The framework involves finding a correspondence between large-scale linear theory, where everything is under control, and small-scale non-linear post-Newtonian dynamics. After a lot of care and rigour it boils down to a modified Poisson equation on both large and small scales (in a particular gauge). The full generality of the modification to the Poisson equation allows, essentially, for a time- and space-dependent value of Newton’s constant. For most modified gravity models, the first level of deviation from general relativity can be parametrised in this way. This approach allows modified gravity to be constrained using observations without the need to run a new simulation for every step of a Monte Carlo parameter fit.
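Schematically (and using my own notation rather than that of the papers), the modification described above amounts to a Poisson equation with an effective, time- and space-dependent gravitational coupling:

```latex
% Standard Poisson equation for the peculiar potential in comoving coordinates:
%   \nabla^2 \Phi = 4\pi G\, a^2\, \delta\rho
% Model-independent modification: promote G to an effective coupling
\nabla^2 \Phi = 4\pi\, G_{\rm eff}(t,\mathbf{x})\, a^2\, \delta\rho
```

Each choice of the function G_{eff} then corresponds (to leading order) to a particular modified gravity model.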

P. S. A couple of papers to go with this talk can be found here and here.

It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting. This is quite a recent one, from about a week ago.

In the talk, Alvaro Pozo tells us about a recent paper in which he and collaborators detect the transition between a core (flat density profile) and a halo (power-law density profile) in dwarf galaxies. The full core + halo profile matches very closely what is expected in simulations of wave dark matter (sometimes called “fuzzy” dark matter), by which is meant dark matter consisting of a particle so light that its de Broglie wavelength is long enough to be astrophysically relevant. That is, there is a very flat core, which drops off suddenly and then flattens into a decaying power-law profile. The core matches the soliton expected in wave dark matter and the halo matches the outer NFW profile expected beyond the soliton.

They also detect evidence for tidal stripping of the matter in the galaxies. The galaxies closer to the centre of the Milky Way have the transition between core and halo happen at lower densities (despite the core density itself not being systematically lower), and the transition also appears to happen closer to the centre of the galaxy, which matches simulations. Of course the core + halo pattern they have clearly observed might be due to something else, but the match between wave dark matter simulations and observations is impressive. An important caveat is that the dark matter particle mass they use is very small and in significant tension with Lyman-alpha constraints on wave-like dark matter. This might indicate that the source of the universal core + halo pattern they are observing is something else, or it might indicate that wave dark matter is more complicated than the simplest models suggest.
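For reference, wave dark matter simulations are usually fitted with a solitonic core matched onto an NFW halo; schematically (the numerical coefficient in the soliton fit is quoted from memory of the simulation literature, so check the papers for exact values):

```latex
% Soliton core (flat inner profile, sharp drop near the core radius r_c):
\rho_{\rm sol}(r) \simeq \rho_c\left[1 + 0.091\,(r/r_c)^2\right]^{-8}
% NFW halo outside the transition (decaying power law at large r):
\rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1+r/r_s)^2}
```

It is the radius and density at which the first profile hands over to the second that the observations pin down.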

P. S. The papers that accompany this talk can be found here.

P.P.S. If you’re interested in wave dark matter there is a nice recent review article by Lam Hui here.

It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting. Since I haven’t posted any of these for a while I’ve got a few to catch up on – this one is from September 2020.

In this talk Marika Asgari tells us about the recent Kilo-Degree Survey (KiDS) cosmological results. These are the first results from KiDS since the survey reached a sky coverage of 1000 square degrees. Marika first explains how they know that the results are “statistics dominated” rather than “systematics dominated”, meaning that the dominant uncertainty comes from statistical errors, not systematic ones. She then presents the cosmological results, which primarily constrain the clumpiness of matter in the universe and therefore constrain Ω_{m} and σ_{8}. In the combined parameter “S_{8}”, which their data constrain almost independently of Ω_{m}, they see a more than 3σ tension with the equivalent parameter one would infer from Planck.
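For readers who haven’t met it, S_{8} is the conventional combination

```latex
S_8 \equiv \sigma_8 \sqrt{\Omega_m/0.3}
```

which picks out the direction in the (Ω_{m}, σ_{8}) plane that weak lensing surveys measure best.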

P. S. The papers that accompany this talk can be found here and here.

It’s time I shared another one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting.

In this video, Eiichiro Komatsu and Yuto Minami talk about their recent work, first devising a way to extract a parity-violating signature in the cosmic microwave background, as manifested by a form of birefringence. If the universe is birefringent then E-mode polarization would change into B-mode as electromagnetic radiation travels through space, so there would be a non-zero correlation between the two measured modes. They try to measure this correlation using the Planck 2018 data, obtaining a 2.4σ “hint” of a signal.

A problem with the measurement is that systematic errors, such as imperfectly calibrated detector angles, could mimic the signal. Yuto and Eiichiro’s idea was to measure the detector angle by looking at the E-B correlation in the foregrounds, where light hasn’t travelled far enough to be affected by any potential birefringence in the universe. They argue that this allows them to distinguish between the two types of measured E-B correlation. However, this is only the case if there is no intrinsic correlation between the E-mode and B-mode polarization in the foregrounds, which may not be the case, but which they are testing. The method can be applied to any of the plethora of CMB experiments currently underway so there will probably be more results soon that may shed further light on this issue.
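Schematically, a uniform rotation of the plane of polarization, whether from cosmic birefringence (angle β) or a miscalibrated detector (angle α), mixes E and B modes and induces a cross-correlation of the form

```latex
C_\ell^{EB} = \tfrac{1}{2}\sin\!\big(4(\alpha+\beta)\big)\left(C_\ell^{EE} - C_\ell^{BB}\right)
```

This is a simplified version: the point of the Minami–Komatsu method is that the foregrounds are rotated only by α while the CMB is rotated by α + β, which is what lets the two angles be separated.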

Incidentally, this reminds me of my Cardiff days, when work was going on to measure the same effect using the QUaD instrument. I wasn’t involved with QUaD but I do remember having interesting chats about the theory behind the measurement, or rather the upper limit as it turned out to be (which is reported here). Looking at the paper again, I see that it involved researchers from the Department of Experimental Physics at Maynooth University.

P. S. The paper that accompanies this talk can be found here.

It’s been too long since I shared one of those interesting cosmology talks on the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but for those seriously interested in cosmology at a research level the talks should prove interesting.

Anyway, although I’ve been too busy to check out the talks much recently, I couldn’t resist sharing this one, not only because it’s on a topic I find interesting (and have worked on) but also because one of the presenters (Mateja Gosença) is a former PhD student of mine from Sussex! So before I go fully into proud supervisor mode, I’ll just say that the talk is about AxioNyx, a new public code for simulating both ultralight (or “fuzzy”, so called because its de Broglie wavelength is large enough to be astrophysically relevant) dark matter (FDM) and cold dark matter (CDM) simultaneously. The code simulates the FDM using adaptive mesh refinement and the CDM using N-body particles.
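For context, the FDM component in codes of this kind is evolved via the Schrödinger–Poisson system, written here schematically in physical coordinates for simplicity (the code itself works with comoving variables):

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2\psi + m\,\Phi\,\psi,
\qquad
\nabla^2\Phi = 4\pi G\left(|\psi|^2 - \bar{\rho}\right)
```

The wavefunction ψ needs a grid fine enough to resolve the de Broglie wavelength, which is why adaptive mesh refinement is the natural tool for the FDM while ordinary particles suffice for the CDM.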

P. S. The paper that accompanies this talk can be found on the arXiv here.

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In this talk, Clare Burrage of Nottingham University explains how chameleon dark energy models can be very tightly constrained by laboratory-scale experiments (as opposed to particle accelerators and space missions). Chameleon models were popular for dark energy because their non-linear potentials generically create screening mechanisms, which stop them generating a fifth force despite their coupling to matter; the net effect is to make them hard to detect on Earth. On the other hand, in a suitably precise atomic experiment the screening can be minimised and the effect of the chameleon field measured. Such an experiment has been constructed, and it rules out almost all of the viable parameter space in which a chameleon model could explain dark energy.

The paper that accompanies this talk can be found here and the talk is here:

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In the talk, Colin Hill explains how, even though early dark energy can alleviate the Hubble tension, it does so at the expense of increasing other tensions. Early dark energy can raise the expansion rate inferred from the cosmic microwave background (CMB) by changing the sound horizon at the last scattering surface. However, early dark energy also suppresses the growth of perturbations that are within the horizon while it is active. This means that, in order to fit the CMB power spectrum, the matter density must increase (and the spectral index becomes more blue-tilted), and the amplitude of the matter power spectrum gets bigger. In their paper, Colin and his coauthors show that this affects the weak lensing measurements by DES, KiDS and HSC, so that including those experiments in a full data analysis makes things discordant again. The Hubble parameter is pulled back down, restoring most of the tension between local and CMB measurements of H0, and the tension in S_8 is magnified by the increased mismatch between the predicted and measured matter power spectrum.
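The lever arm here is the sound horizon at last scattering, which sets the acoustic angular scale measured so precisely by the CMB:

```latex
r_s = \int_{z_*}^{\infty} \frac{c_s(z)}{H(z)}\,dz,
\qquad
\theta_* = \frac{r_s}{D_A(z_*)}
```

Early dark energy increases H(z) before recombination and so shrinks r_s; to keep the well-measured θ_* fixed, the angular diameter distance D_A must shrink too, which is what pushes the inferred H_0 upwards.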

The overall moral of this story is that current cosmological models are so heavily constrained by the data that a relatively simple fix in one part of the model space tends to cause problems elsewhere. It’s a bit like one of those puzzles in which you have to arrange all the pieces in a magic square, but every time you move one piece you mess up the others.

The paper that accompanies this talk can be found here.


Here’s another example from the series of cosmology talks curated by Shaun Hotchkiss. In this one, esteemed astronomer and Nobel Prize winner Adam Riess talks about what he and his collaborators consider to be the leading candidate for a systematic error in the SH0ES measurement of the expansion rate of the Universe. This is “Cepheid crowding”, the possibility that background sources change our interpretation of Cepheid brightness, ruining one step in the SH0ES distance ladder. Riess and collaborators devise a nice way to test whether crowding is correctly accounted for and find that it is, so crowding cannot be the “explanation” of an error in the distance-ladder measurement of H0. Riess also stresses that both the early- and late-universe measurements of H0 are now backed up by multiple different measurements. Accordingly, if the resolution isn’t fundamental physics, then no single systematic can entirely resolve the tension.

P. S. The paper that accompanies this talk can be found on the arXiv here.
