## Cosmology Examination Results

Posted in Education, Maynooth, The Universe and Stuff on July 14, 2020 by telescoper

The examination season in Maynooth being now over, and the results having been issued, I thought I’d pass on the results not for individual students but for the Universe as a whole.

As you can see Dark Energy is top of the class, with a good II.1 (Upper Second Class). A few years ago this candidate looked likely to get a mark over 70% and thus get First Class Honours, but in the end fell just short. Given the steady performance and possible improvement in future I think this candidate will probably be one to reckon with in a future research career.

In second place, a long way behind on about 27%, is Dark Matter. This candidate only answered some of the questions asked, and those not very convincingly. Although reasonably strong on theory, the candidate didn’t show up at all in the laboratory. The result is a fail but there is an opportunity for a repeat at a future date, though there is some doubt as to whether the candidate would appear.

At the bottom of the class on a meagre 5% we find Ordinary Matter. It seems this candidate must have left the examination early and did not even give the correct name (baryons) on the script. Technically this one could repeat, but even then would be unlikely to achieve an Ordinary Degree. I would suggest that baryons aren’t really cut out for cosmology and should make alternative plans for the future.

P.S. Photons and neutrinos ceased interacting with the course some time ago. Owing to this lack of engagement they are assumed to have dropped out, and their marks are not shown.

## Varun Sahni on Dark Matter & Dark Energy

Posted in The Universe and Stuff on June 17, 2020 by telescoper

I’m very happy to be able to share a couple of lectures by esteemed cosmologist and erstwhile co-author Varun Sahni of the Inter University Centre for Astronomy & Astrophysics (IUCAA) in Pune, India. They’re at an introductory level appropriate for a summer school so I think quite a lot of students will find them interesting and informative!

## Cosmology Talks – Clare Burrage on Chameleon Dark Energy

Posted in The Universe and Stuff on June 11, 2020 by telescoper

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In this talk, Clare Burrage of Nottingham University explains how chameleon dark energy models can be very tightly constrained by laboratory-scale experiments (as opposed to particle accelerators and space missions). Chameleon models were popular for dark energy because their non-linear potentials generically create screening mechanisms, which stop them generating a fifth force despite their coupling to matter; the net effect is to make them hard to detect on Earth. On the other hand, in a suitably precise atomic experiment the screening can be minimised and the effect of the chameleon field measured. Such an experiment has been constructed, and it rules out almost all of the viable parameter space in which a chameleon model can explain dark energy.
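For context, the generic chameleon setup – a standard illustrative form, not necessarily the exact model considered in the talk – couples a scalar field φ to the ambient matter density ρ through an effective potential:

```latex
V_{\rm eff}(\phi) \;=\; V(\phi) \;+\; \rho\, e^{\beta \phi / M_{\rm Pl}},
\qquad
V(\phi) \;=\; \frac{\Lambda^{4+n}}{\phi^{n}}
```

The density-dependent term steepens the potential in dense environments, so the effective mass of the field (the curvature of V_eff at its minimum) grows with ρ, and the associated fifth force becomes short-ranged precisely where experiments are easiest to perform. That is the screening the atomic experiment is designed to minimise.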

The paper that accompanies this talk can be found here and the talk is here:

## Cosmology Talks – Colin Hill on Early Dark Energy

Posted in The Universe and Stuff on June 2, 2020 by telescoper

Here is another one of those Cosmology Talks curated on YouTube by Shaun Hotchkiss.

In the talk, Colin Hill explains how, even though early dark energy can alleviate the Hubble tension, it does so at the expense of increasing other tensions. Early dark energy can raise the expansion rate inferred from the cosmic microwave background (CMB) by changing the sound horizon at the last scattering surface. However, early dark energy also suppresses the growth of perturbations that are within the horizon while it is active. This means that, in order to fit the CMB power spectrum, the matter density must increase (and the spectral index become more blue-tilted) and the amplitude of the matter power spectrum must get bigger. In their paper, Colin and his coauthors show that this affects the weak lensing measurements by DES, KiDS and HSC, so that including those experiments in a full data analysis makes things discordant again. The Hubble parameter is pulled back down, restoring most of the tension between local and CMB measurements of H0, and the tension in S_8 is magnified by the increased mismatch between the predicted and measured matter power spectrum.

The overall moral of this story is that current cosmological models are so heavily constrained by the data that a relatively simple fix in one part of the model space tends to cause problems elsewhere. It’s a bit like one of those puzzles in which you have to arrange all the pieces in a magic square, but every time you move one bit you mess up the others.

The paper that accompanies this talk can be found here.

## Voids, Galaxies and Cosmic Acceleration

Posted in The Universe and Stuff on February 4, 2020 by telescoper

Time for a quick plug for a paper by Nadathur et al. that appeared on the arXiv recently with the title Testing low-redshift cosmic acceleration with large-scale structure. You can download a PDF of the entire paper here.

The particularly interesting thing about this result is that it gives strong evidence for models with a cosmological constant (or perhaps some other form of dark energy) in a manner that is independent of the other main cosmological constraints (i.e. the Cosmic Microwave Background or Type Ia Supernovae). This constraint is based on combining properties of void regions (underdensities) with Baryon Acoustic Oscillations (BAOs) to produce constraints that are stronger than those obtained using BAOs on their own. The data used derive largely from the BOSS survey.

As well as this there’s another intriguing result, or rather two results. The first is that the BAO+voids data from redshifts z < 2 give H0 = 72.3 ± 1.9, while, on the other hand, adding BAO information from the Lyman-alpha forest from z > 2 gives a value H0 = 69 ± 1.2, favouring Planck over Riess. Once again, the ‘tension’ over the value of the Hubble constant appears to be related to using nearby rather than distant sources.

## Luminosity Evolution in Type Ia Supernovae?

Posted in The Universe and Stuff on January 14, 2020 by telescoper

Figure 1 of Kang et al.

During this afternoon’s very exciting Meeting of the Faculty of Science and Engineering at Maynooth University I suddenly remembered a paper I became aware of over Christmas but then forgot about. There’s an article here describing the paper that makes some pretty strong claims, which was what alerted me to it. The actual paper, by Kang et al., which has apparently been refereed and accepted for publication by the Astrophysical Journal, can be found on the arXiv here. The abstract reads:

The most direct and strongest evidence for the presence of dark energy is provided by the measurement of galaxy distances using type Ia supernovae (SNe Ia). This result is based on the assumption that the corrected brightness of SN Ia through the empirical standardization would not evolve with look-back time. Recent studies have shown, however, that the standardized brightness of SN Ia is correlated with host morphology, host mass, and local star formation rate, suggesting a possible correlation with stellar population property. In order to understand the origin of these correlations, we have continued our spectroscopic observations to cover most of the reported nearby early-type host galaxies. From high-quality (signal-to-noise ratio ~175) spectra, we obtained the most direct and reliable estimates of population age and metallicity for these host galaxies. We find a significant correlation between SN luminosity (after the standardization) and stellar population age at a 99.5% confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Based on this result, we further show that the previously reported correlations with host morphology, host mass, and local star formation rate are most likely originated from the difference in population age. This indicates that the light-curve fitters used by the SNe Ia community are not quite capable of correcting for the population age effect, which would inevitably cause a serious systematic bias with look-back time. Notably, taken at face values, a significant fraction of the Hubble residual used in the discovery of the dark energy appears to be affected by the luminosity evolution. We argue, therefore, that this systematic bias must be considered in detail in SN cosmology before proceeding to the details of the dark energy.

Of course evidence for significant luminosity evolution of Type Ia supernovae would throw a big spanner in the works involved in using these objects to probe cosmology (specifically dark energy), but having skimmed the paper I’m a bit skeptical about the results, largely because they seem to use only a very small number of supernovae to reach their conclusions and I’m not convinced about selection effects. I have an open mind, though, so I’d be very interested to hear through the comments box the views of any experts in this field.

## First Light at the Dark Energy Spectroscopic Instrument

Posted in The Universe and Stuff on November 4, 2019 by telescoper

While I was away last week there was quite a lot of press coverage (e.g. here) about the new Dark Energy Spectroscopic Instrument, which has just seen first light. I didn’t have time to mention this until now, and in any case I have little to add to the coverage that has already appeared, but it does give me the excuse to post this nice video – which features quite a few people I actually know! – to describe the huge galaxy survey that DESI will perform. It’s hard to believe that when I started in the field in 1985 the largest such survey, which took several years to compile, had only a few thousand galaxies in it. The DESI instrument will be able to determine spectra of more sources than that in a single pointing of the telescope that lasts about 20 minutes. Overall it should determine redshifts of over 35 million galaxies! Vorsprung durch Technik.

## The Danger to Science from Hype

Posted in The Universe and Stuff on October 5, 2019 by telescoper

I came across an article in the Irish Times this morning entitled ‘Hyping research runs risk of devaluing science’. That piece is directly aimed at medical science and the distressing tendency of some researchers in that field to make extravagant claims about ‘miracle cures’ that turn out to be a very long way from being scientifically tested. The combination of that article, yesterday’s blog post, and the fact that this year I’ve been speaking and writing a lot about the 1919 Eclipse expedition reminded me that I ended a book I wrote in 1998 with a discussion of the dangers to science of researchers being far too certain and giving the impression that they are members of some sort of priesthood that thinks it deals in absolute truths.

I decided to post the last few paragraphs of that book here because they talk about the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions. The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. At the same time that science reaches further and further beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that I was voicing doubts about the interpretation of the early results from supernovae in 1998 that suggested the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain skeptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have – I just think we’re probably missing some important pieces of the puzzle. I may of course be wrong in this but, then again, so might everyone.

## Hubble Tension: an “Alternative” View?

Posted in Bad Statistics, The Universe and Stuff on July 25, 2019 by telescoper

There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.

Here is the abstract of the Vagnozzi paper:

I posted this picture last week which is relevant to the discussion:

The point is that if you allow the equation of state parameter w to vary from the value of w=-1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter then you have to assign a prior probability over the space of values that parameter could hold. That prior penalty is carried through to the posterior probability. Unless the new model fits observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.

The Vagnozzi paper represents a statement of this in the context of the Hubble tension. If a new floating parameter w is introduced the data prefer a value less than -1 (as demonstrated in the figure) but on posterior probability grounds the resulting model is less probable than the standard cosmology for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is introduced then the resulting model is not penalized by having to spread the prior probability out over a range of values but puts all its prior eggs in one basket labelled w = -1.3.

This is of course true. The problem is that the value w = -1.3 does not derive from any ab initio principle of physics but a posteriori from the inference described above. It’s no surprise that you can get a better answer if you know what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score…
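As a toy illustration of this bookkeeping (all the numbers are invented; a hypothetical Gaussian ‘measurement’ of w stands in for the real likelihood), one can compare the Bayesian evidence for the three options – w fixed at -1, w free with a uniform prior, and w fixed a posteriori at the value the data happen to prefer:

```python
import math

def likelihood(w, w_obs=-1.2, sigma=0.15):
    """Hypothetical Gaussian likelihood for the equation-of-state
    parameter w. The 'measurement' w_obs = -1.2 +/- 0.15 is invented
    purely for illustration."""
    return math.exp(-0.5 * ((w - w_obs) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Model 1: standard cosmology, w fixed at -1.
# All the prior mass sits on one point, so the evidence is just L(-1).
Z_lcdm = likelihood(-1.0)

# Model 2: w free, uniform prior on [-2, 0].
# Evidence = average of the likelihood over the prior; spreading the
# prior over a wide range dilutes it -- the "prior penalty".
n = 2000
grid = [-2.0 + 2.0 * (i + 0.5) / n for i in range(n)]
Z_free = sum(likelihood(w) for w in grid) / n

# Model 3: w fixed a posteriori at the value the data prefer.
Z_cherry = likelihood(-1.3)

print(f"evidence, w = -1 fixed   : {Z_lcdm:.3f}")
print(f"evidence, w free         : {Z_free:.3f}")
print(f"evidence, w = -1.3 fixed : {Z_cherry:.3f}")
```

The free-w model fits best at its likelihood peak but pays the prior penalty, while the a-posteriori fixed value ‘wins’ only because it was chosen after looking at the data – exactly the sleight of hand at issue.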

Indeed, many cosmologists think any value of w < -1 should be ruled out ab initio because such values don’t make physical sense anyway.

## Hubble’s Constant – A Postscript on w

Posted in The Universe and Stuff on July 15, 2019 by telescoper

Last week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. You can download a PDF of the full paper here.

Reading the paper through over the weekend I was struck by Figure 6:

This shows the constraints on H0 and the parameter w which is used to describe the dark energy component. Bear in mind that these estimates of cosmological parameters actually involve the simultaneous estimation of several parameters, six in the case of the standard ΛCDM model. Incidentally, H0 is not one of the six basic parameters of the standard model – it is derived from the others – and some important cosmological observations are relatively insensitive to its value.

The parameter w is the equation-of-state parameter for the dark energy component, so that the pressure p is related to the energy density ρc² via p = wρc². The fixed value w = -1 applies if the dark energy is of the form of a cosmological constant (or vacuum energy); I explained why here. Non-relativistic matter (dominated by rest-mass energy) has w = 0, while ultra-relativistic matter has w = 1/3.

Applying the cosmological version of the thermodynamic relation for adiabatic expansion, dE = -p dV, one finds that ρ ∼ a^(-3(1+w)), where a is the cosmic scale factor. Note that w = -1 gives a constant energy density as the Universe expands (the cosmological constant); w = 0 gives ρ ∼ a^(-3), as expected for ‘ordinary’ matter.
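Spelt out, with E = ρc²V and V ∝ a³, the scaling follows in two lines:

```latex
d(\rho c^{2} a^{3}) = -p\, d(a^{3}) = -w \rho c^{2}\, d(a^{3})
\;\Longrightarrow\;
\frac{d\rho}{\rho} = -3(1+w)\,\frac{da}{a}
\;\Longrightarrow\;
\rho \propto a^{-3(1+w)}
```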

As I already mentioned, in the standard cosmological model w is fixed at w = -1, but if it is treated as a free parameter then it can be added to the usual six to produce the Figure shown above. I should add for Bayesians that this plot shows the posterior probability assuming a uniform prior on w.

What is striking is that the data seem to prefer a very low value of w. Indeed the peak of the likelihood (which determines the peak of the posterior probability if the prior is flat) appears to be off the bottom of the plot. It must be said that the size of the black contour lines (at one sigma and two sigma for dashed and solid lines respectively) suggests that these data aren’t really very informative; the case w=-1 is well within the 2σ contour. In other words, one might get a slightly better fit by allowing the equation of state parameter to float, but the quality of the fit might not improve sufficiently to justify the introduction of another parameter.

Nevertheless it is worth mentioning that if it did turn out, for example, that w = -2, that would imply ρ ∼ a^(+3), i.e. an energy density that increases steeply as a increases (i.e. as the Universe expands). That would be pretty wild!
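The special cases quoted here are easy to check with a one-line function (a trivial sketch, nothing more):

```python
# Density scaling rho ~ a^(-3(1+w)) for various equations of state.
def density_exponent(w):
    """Exponent n in rho ~ a^n for a component with p = w * rho * c^2."""
    return -3 * (1 + w)

cases = [("cosmological constant", -1.0),
         ("ordinary matter", 0.0),
         ("radiation", 1.0 / 3.0),
         ("hypothetical w = -2", -2.0)]
for label, w in cases:
    print(f"w = {w:+.2f} ({label}): rho ~ a^({density_exponent(w):+.0f})")
```

The cosmological constant gives exponent 0 (constant density), matter gives -3, radiation gives -4, and the hypothetical w = -2 gives +3 – the density that grows with expansion.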

On the other hand, there isn’t really any physical justification for cases with w<-1 (in terms of a plausible model) which, in turn, makes me doubt the reasonableness of imposing a flat prior. My own opinion is that if dark energy turns out not to be of the simple form of a cosmological constant then it is likely to be too complicated to be expressed in terms of a single number anyway.

Postscript to this postscript: take a look at this paper from 2002!