Archive for Dark Energy

First Light at the Dark Energy Spectroscopic Instrument

Posted in The Universe and Stuff on November 4, 2019 by telescoper

While I was away last week there was quite a lot of press coverage (e.g. here) about the new Dark Energy Spectroscopic Instrument, which has just seen first light. I didn’t have time to mention this until now, and in any case I have little to add to the coverage that has already appeared, but it does give me the excuse to post this nice video – which features quite a few people I actually know! – to describe the huge galaxy survey that DESI will perform. It’s hard to believe that when I started in the field in 1985 the largest such survey, which took several years to compile, contained only a few thousand galaxies. The DESI instrument will be able to determine spectra for more sources than that in a single pointing of the telescope lasting about 20 minutes. Overall it should determine redshifts for over 35 million galaxies! Vorsprung durch Technik.


The Danger to Science from Hype

Posted in The Universe and Stuff on October 5, 2019 by telescoper

I came across an article in the Irish Times this morning entitled `Hyping research runs risk of devaluing science'. That piece is directly aimed at medical science and the distressing tendency of some researchers in that field to make extravagant claims about `miracle cures' that turn out to be a very long way from being scientifically tested. The combination of that article, yesterday’s blog post, and the fact that this year I’ve been speaking and writing a lot about the 1919 Eclipse expedition reminded me that I ended a book I wrote in 1998 with a discussion of the dangers to science of researchers being far too certain and giving the impression that they are members of some sort of priesthood that thinks it deals in absolute truths.

I decided to post the last few paragraphs of that book here because they talk about the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions. The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies and life itself came into being. But it is not only the theoretical ambition of science that is getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. And the further science reaches beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that, back in 1998, I was voicing doubts about the interpretation of the early supernova results suggesting that the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain skeptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have – I just think we’re probably missing some important pieces of the puzzle. I may of course be wrong in this but, then again, so might everyone.


Hubble Tension: an “Alternative” View?

Posted in Bad Statistics, The Universe and Stuff on July 25, 2019 by telescoper

There was a new paper last week on the arXiv by Sunny Vagnozzi about the Hubble constant controversy (see this blog passim). I was going to refrain from commenting but I see that one of the bloggers I follow has posted about it so I guess a brief item would not be out of order.

Here is the abstract of the Vagnozzi paper:

Last week I posted this picture, which is relevant to the discussion:

The point is that if you allow the equation of state parameter w to vary from the value w=-1 that it has in the standard cosmology then you get a better fit. However, it is one of the features of Bayesian inference that if you introduce a new free parameter you have to assign a prior probability over the space of values that parameter could take. Spreading the prior probability over that space imposes a penalty which is carried through to the posterior probability. Unless the new model fits the observational data significantly better than the old one, this prior penalty will lead to the new model being disfavoured. This is the Bayesian statement of Ockham’s Razor.
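To make the Occam penalty concrete, here is a minimal numerical sketch. It uses a toy Gaussian likelihood for w with invented numbers (a best-fit value of -1.2, a width of 0.15, and a flat prior range of my own choosing); it is purely illustrative and has nothing to do with the actual likelihoods analysed by Vagnozzi.

```python
# Toy illustration of the Bayesian Occam penalty described above.
# All numbers here are invented for illustration only.
import numpy as np

def likelihood(w, w_best=-1.2, sigma=0.15):
    """Toy Gaussian likelihood for the equation of state parameter w."""
    return np.exp(-0.5 * ((w - w_best) / sigma) ** 2)

# Model 1: standard cosmology, w fixed at -1. No free parameter, so the
# evidence is just the likelihood evaluated at that single point.
evidence_lcdm = likelihood(-1.0)

# Model 2: w allowed to float, with a flat prior on an assumed range.
w_lo, w_hi = -3.0, -1.0 / 3.0
w = np.linspace(w_lo, w_hi, 10000)
prior = 1.0 / (w_hi - w_lo)                      # flat prior density, integrates to 1
evidence_free_w = np.sum(likelihood(w) * prior) * (w[1] - w[0])

# Model 3: w fixed *after the event* at the best-fit value. This escapes the
# Occam penalty, but only because the data were inspected first.
evidence_post_hoc = likelihood(-1.2)

print(f"w = -1 fixed:        {evidence_lcdm:.3f}")
print(f"w free, flat prior:  {evidence_free_w:.3f}")
print(f"w = -1.2 post hoc:   {evidence_post_hoc:.3f}")
```

With these made-up numbers the free-w model fits better at its peak but ends up with the lowest evidence, because the flat prior spreads its probability over values of w that the data do not like; the post hoc model `wins' only because its single permitted value was chosen by looking at the answer first, which is exactly the point made below.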

The Vagnozzi paper represents a statement of this in the context of the Hubble tension. If a new floating parameter w is introduced the data prefer a value less than -1 (as demonstrated in the figure) but on posterior probability grounds the resulting model is less probable than the standard cosmology for the reason stated above. Vagnozzi then argues that if a new fixed value of, say, w = -1.3 is introduced then the resulting model is not penalized by having to spread the prior probability out over a range of values but puts all its prior eggs in one basket labelled w = -1.3.

This is of course true. The problem is that the value w = -1.3 does not derive from any ab initio principle of physics; it is chosen a posteriori, in the light of the inference described above. It’s no surprise that you can get a better answer if you know what outcome you want. I find that I am very good at forecasting the football results if I make my predictions after watching Final Score.

Indeed, many cosmologists think that values of w < -1 should be ruled out ab initio because they don’t make physical sense anyway.


Hubble’s Constant – A Postscript on w

Posted in The Universe and Stuff on July 15, 2019 by telescoper

Last week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. You can download a PDF of the full paper here.

Reading the paper through over the weekend, I was struck by Figure 6:

This shows the constraints on H0 and the parameter w, which is used to describe the dark energy component. Bear in mind that these estimates of cosmological parameters actually involve the simultaneous estimation of several parameters, six in the case of the standard ΛCDM model. Incidentally, H0 is not one of the six basic parameters of the standard model – it is derived from the others – and some important cosmological observations are relatively insensitive to its value.

The parameter w is the equation of state parameter for the dark energy component, so that the pressure p is related to the energy density ρc² via p=wρc². The fixed value w=-1 applies if the dark energy is of the form of a cosmological constant (or vacuum energy). I explained why here. Non-relativistic matter (dominated by rest-mass energy) has w=0 while ultra-relativistic matter has w=1/3.

Applying the cosmological version of the thermodynamic relation for adiabatic expansion, “dE=-pdV”, one finds that ρ ∼ a^{-3(1+w)}, where a is the cosmic scale factor. Note that w=-1 gives a constant energy density as the Universe expands (the cosmological constant); w=0 gives ρ ∼ a^{-3}, as expected for `ordinary' matter.
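For anyone who wants the intermediate step spelled out, here is the standard manipulation (written out by me, not taken from the original post) that leads from “dE=-pdV” to the scaling above, taking the energy in a comoving volume to be E = ρc²a³:

```latex
% dE = -p dV with E = \rho c^2 a^3, V \propto a^3 and p = w \rho c^2:
\[
\begin{aligned}
  d(\rho c^2 a^3) &= -\,w\rho c^2\, d(a^3)\\
  a^3\, d\rho + 3\rho a^2\, da &= -3w\rho a^2\, da\\
  \frac{d\rho}{\rho} &= -3(1+w)\,\frac{da}{a}
  \quad\Longrightarrow\quad \rho \propto a^{-3(1+w)}.
\end{aligned}
\]
% Checks: w=-1 gives \rho = const; w=0 gives \rho \sim a^{-3};
% w=1/3 (radiation) gives \rho \sim a^{-4}; w=-2 would give \rho \sim a^{+3}.
```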

As I already mentioned, in the standard cosmological model w is fixed at w=-1, but if it is treated as a free parameter then it can be added to the usual six to produce the Figure shown above. I should add for Bayesians that this plot shows the posterior probability assuming a uniform prior on w.

What is striking is that the data seem to prefer a very low value of w. Indeed the peak of the likelihood (which determines the peak of the posterior probability if the prior is flat) appears to be off the bottom of the plot. It must be said that the size of the black contour lines (at one sigma and two sigma for dashed and solid lines respectively) suggests that these data aren’t really very informative; the case w=-1 is well within the 2σ contour. In other words, one might get a slightly better fit by allowing the equation of state parameter to float, but the quality of the fit might not improve sufficiently to justify the introduction of another parameter.

Nevertheless it is worth mentioning that if it did turn out, for example, that w=-2, that would imply ρ ∼ a^{+3}, i.e. an energy density that increases steeply as a increases (i.e. as the Universe expands). That would be pretty wild!

On the other hand, there isn’t really any physical justification for cases with w<-1 (in terms of a plausible model) which, in turn, makes me doubt the reasonableness of imposing a flat prior. My own opinion is that if dark energy turns out not to be of the simple form of a cosmological constant then it is likely to be too complicated to be expressed in terms of a single number anyway.


Postscript to this postscript: take a look at this paper from 2002!

Dark Energy – Lectures by Varun Sahni

Posted in The Universe and Stuff on June 9, 2019 by telescoper

I thought I’d share this lecture course about Dark Energy here. It was delivered by Varun Sahni at an international school on cosmology earlier this year. The material is quite technical in places but I’m sure these lectures will prove a very helpful introduction to, for example, new PhD students in this area. Varun has been a very good friend and colleague of mine for many years, and he is an excellent lecturer!

Here are the three lectures:

The Negative Mass Bug

Posted in Astrohype, Open Access, The Universe and Stuff on February 25, 2019 by telescoper

You may have noticed that some time ago I posted about a paper by Jamie Farnes, published in Astronomy & Astrophysics but available on the arXiv here, which suggests that material with negative mass might account for dark energy and/or dark matter.

Here is the abstract of said paper:

Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived upon the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously-created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.

Well, there’s a new paper just out on the arXiv by Hector Socas-Navarro, with the following abstract:

A recent work by Farnes (2018) proposed an alternative cosmological model in which both dark matter and dark energy are replaced with a single fluid of negative mass. This paper presents a critical review of that model. A number of problems and discrepancies with observations are identified. For instance, the predicted shape and density of galactic dark matter halos are incorrect. Also, halos would need to be less massive than the baryonic component or they would become gravitationally unstable. Perhaps the most challenging problem in this theory is the presence of a large-scale version of the `runaway’ effect, which would result in all galaxies moving in random directions at nearly the speed of light. Other more general issues regarding negative mass in general relativity are discussed, such as the possibility of time-travel paradoxes.

Among other things there is this:

After initially struggling to reproduce the F18 results, a careful inspection of his source code revealed a subtle bug in the computation of the gravitational acceleration. Unfortunately, the simulations in F18 are seriously compromised by this coding error whose effect is that the gravitational force decreases with the inverse of the distance, instead of the distance squared.

Oh dear.
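Just to illustrate the kind of slip being described (this is a generic direct-summation sketch with names of my own invention, not the actual code used in Farnes 2018), a single missing power of the separation in the denominator is all it takes to turn an inverse-square force into an inverse-distance one:

```python
# Generic pairwise gravitational acceleration by direct summation.
# Purely illustrative; not the Farnes (2018) code.
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, softening=1e-3):
    """Return the acceleration on each particle due to all the others."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i == j:
                continue
            dx = pos[j] - pos[i]                        # separation vector
            r = np.sqrt(np.dot(dx, dx) + softening**2)  # softened distance
            # Correct Newtonian form: magnitude G m / r^2 along the unit
            # vector dx / r, i.e. G m dx / r^3.
            acc[i] += G * mass[j] * dx / r**3
            # The bug described above is equivalent to writing r**2 here,
            # which makes the force fall off as 1/r instead of 1/r^2.
    return acc
```

Because both versions look superficially plausible at a glance, this is exactly the sort of error that can survive a referee’s reading of a paper and only come to light when someone else runs the code.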

I don’t think I need go any further into this particular case, which would just rub salt into the wounds of Farnes (2018), but I will make a general comment. Peer review is the best form of quality stamp that we have but, as this case demonstrates, it is by no means flawless. The paper by Farnes (2018) was refereed and published, but is now shown to be wrong*. Just as authors can make mistakes, so can referees. I know I’ve screwed up as a referee in the past, so I’m not claiming to be better than anyone in saying this.

*This claim is contested: see the comment below.

I don’t think the lesson is that we should just scrap peer review, but I do think we need to be more imaginative about how it is used than just relying on one or two individuals to do it. This case shows that science eventually works, as the error was found and corrected, but that was only possible because the code used by Farnes (2018) was made available for scrutiny. This is not always what happens. I take this as a vindication of open science, and an example of why scientists should share their code and data to enable others to check the results. I’d like to see a system in which papers are not regarded as `final' documents but as things that can be continuously modified in response to independent scrutiny, but that would require a major upheaval in academic practice and is unlikely to happen any time soon.

In this case, in the time since publication there has been a large amount of hype about the Farnes (2018) paper, and it’s unlikely that any of the media who carried stories about the results therein will ever publish retractions. This episode does therefore illustrate the potentially damaging effect on public trust that the excessive thirst for publicity can have. So how do we balance open science against the likelihood that wrong results will be taken up by the media before the errors are found? I wish I knew!

The Future Circular Collider: what’s the MacGuffin?

Posted in Science Politics, The Universe and Stuff on February 7, 2019 by telescoper

I’ve been reading a few items here and there about proposals for a Future Circular Collider, even larger than the Large Hadron Collider (and consequently even more expensive). No doubt particle physicists interested in accelerator experiments will be convinced this is the right move, but of course there are other projects competing for funds and it’s by no means certain that the FCC will actually happen.

One of the important things about `Big Science’ when it gets this big is that it has to capture the imagination of people with political influence if it is to be granted funding. Based on past experience that means that there has to be a Big Discovery to be made or a Big Idea to be tested. This Big Thing has to be simple enough for politicians to understand and exciting enough to capture their imagination (and that of the public). In the case of the Large Hadron Collider (LHC), for example, this was the Higgs Boson. In the case of the Euclid space mission, the motivation is Dark Energy.

The Big Thing that sells a project to politicians is not necessarily the thing that most scientists are interested in. The LHC has done a lot of things other than discover the Higgs, and Euclid will do many things other than probe Dark Energy, but there has to be one thing to set it all in motion. It seems to me that the Big Question about the FCC is whether there is something specific that can motivate this project in the way the Higgs did for the LHC. If so, what is it?

Answers on a postcard or, better, through the comments box below.


Humphrey Bogart with the eponymous Maltese Falcon

Anyway, these thoughts reminded me of the concept of a MacGuffin. Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the “top secret plans” in Hitchcock’s The Thirty-Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the “undisclosed object” type of MacGuffin, or it may be scarily obvious, like a doomsday machine or some other “Big Dumb Object” you might find in a science fiction thriller.

Or the MacGuffin may not be a real thing at all. It could be an event or an idea, or even something that doesn’t actually exist in any sense, such as the fictitious decoy character George Kaplan in North by Northwest. In fact North by Northwest is an example of a movie with more than one MacGuffin. Its convoluted plot involves espionage and the smuggling of what is only cursorily described as “government secrets”. These are the main MacGuffin; George Kaplan is a sort of sub-MacGuffin. But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters, played by Cary Grant and Eva Marie Saint, that really engage the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.

Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved by Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was, however, always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have usually forgotten all about it. Hitchcock’s movies rarely bother to explain their MacGuffin(s) in much detail, and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.

Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity.)

There’s nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail in the tales of King Arthur and the Knights of the Round Table, in which the Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found. That’s often how it goes with MacGuffins – even the Maltese Falcon turned out in the end to be a fake – they’re only really needed to start things off.

So let me rephrase the question I posed earlier on. In the case of the Future Circular Collider, what’s the MacGuffin?