## A Dark Energy Mission

Posted in The Universe and Stuff on November 16, 2013 by telescoper

Here’s a challenge for cosmologists and aspiring science communicators out there. Most of you will know the standard cosmological model involves a thing, called Dark Energy, whose existence is inferred from observations that suggest that the expansion of the Universe appears to be accelerating.

That these observations require something a bit weird can be quickly seen by looking at the equation that governs the dynamics of the cosmic scale factor $R$ for a simple model involving matter in the form of a perfect fluid:

$\ddot{R}=-\frac{4\pi G}{3} \left( \rho + \frac{3p}{c^2}\right) R$

The terms in brackets relate to the density and pressure of the fluid, respectively. If the pressure is negligible (as is the case for “dust”), then the expansion is always decelerating, because the density of matter is always a positive quantity; we don’t know of anything that has a negative mass.

The only way to make the expansion of such a universe actually accelerate is to fill it with some sort of stuff that has

$\left( \rho + \frac{3p}{c^2} \right) < 0.$

In the lingo this means that the strong energy condition must be violated; this is what the hypothetical dark energy component is introduced to do. Note that this requires the dark energy to exert negative pressure, i.e. it has to be, in some sense, in tension.
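For a fluid with equation of state $p = w\rho c^2$ the bracketed term becomes $\rho(1+3w)$, so acceleration requires $w < -1/3$. Here is a minimal numerical sketch of that condition (the values of $w$ are just illustrative):

```python
# Sign of the acceleration term rho + 3p/c^2 for a fluid with
# equation of state p = w * rho * c^2, so the term is rho * (1 + 3w).
# Illustrative sketch; rho is set to 1 in arbitrary units.

def accelerates(w, rho=1.0):
    """True if a universe dominated by this fluid accelerates,
    i.e. if rho + 3p/c^2 = rho * (1 + 3*w) is negative."""
    return rho * (1 + 3 * w) < 0

for w in [0.0, 1 / 3, -1 / 3, -1.0]:
    print(f"w = {w:+.2f}: accelerates = {accelerates(w)}")
# dust (w=0) and radiation (w=1/3) decelerate; only w < -1/3 accelerates
```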

However, there’s something about this that seems very paradoxical. Pressure generates a force that pushes; tension corresponds to a force that pulls. In the cosmological setting, though, increasing positive pressure causes a greater deceleration, while making the universe accelerate requires tension. Why should a bigger pushing force cause the universe to slow down, while a pull causes it to speed up?

The lazy answer is to point at the equation and say “that’s what the mathematics says”, but that’s no use at all when you want to explain this to Joe Public.

Your mission, should you choose to accept it, is to explain in language appropriate to a non-expert, why a pull seems to cause a push…

## Tension in Cosmology?

Posted in Astrohype, Bad Statistics, The Universe and Stuff on October 24, 2013 by telescoper

I noticed this abstract (of a paper by Rest et al.) on the arXiv the other day:

We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae (0.03<z<0.65) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We have investigated spatial and time variations in the photometry, and we find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the HST Calspec definition of the AB system. We discuss our efforts to minimize the systematic uncertainties in the photometry. A Hubble diagram is constructed with a subset of 112 SNe Ia (out of the 146) that pass our light curve quality cuts. The cosmological fit to 313 SNe Ia (112 PS1 SNe Ia + 201 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields w = -1.015^{+0.319}_{-0.201}(Stat)^{+0.164}_{-0.122}(Sys). When combined with BAO+CMB(Planck)+H0, the analysis yields \Omega_M = 0.277^{+0.010}_{-0.012} and w = -1.186^{+0.076}_{-0.065} including all identified systematics, as spelled out in the companion paper by Scolnic et al. (2013a). The value of w is inconsistent with the cosmological constant value of -1 at the 2.4 sigma level. This tension has been seen in other high-z SN surveys and endures after removing either the BAO or the H0 constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find w = -1.142^{+0.076}_{-0.087}, which diminishes the discord to <2 sigma. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample will be 3 times as large as this initial sample, which should provide more conclusive results.

The mysterious Pan-STARRS stands for the Panoramic Survey Telescope and Rapid Response System, a set of telescopes, cameras and related computing hardware that monitors the sky from its base in Hawaii. One of the many things this system can do is detect and measure distant supernovae, hence the particular application to cosmology described in the paper. The abstract mentions a preliminary measurement of the parameter w, which for those of you who are not experts in cosmology is usually called the “equation of state” parameter for the dark energy component involved in the standard model. What it describes is the relationship between the pressure $P$ and the energy density $\rho c^2$ of this mysterious stuff, via the relation $P=w\rho c^2$. The particularly interesting case is w=-1, which corresponds to a cosmological constant term; see here for a technical discussion. However, we don’t know how to explain this dark energy from first principles, so really w is a parameter that describes our ignorance of what is actually going on. In other words, the cosmological constant provides the simplest model of dark energy, but even in that case we don’t know where it comes from, so it might well be something different; estimating w from surveys can therefore tell us whether we’re on the right track or not.

The abstract explains that, within the errors, the Pan-STARRS data on their own are consistent with w=-1. More interestingly, though, combining the supernova observations with others shifts the best-fit value of w towards a value a bit less than -1 (although still with quite a large uncertainty). Incidentally, a value of w less than -1 is generally described as a “phantom” dark energy component. I’ve never really understood why…
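As a quick sanity check, the “2.4 sigma” figure quoted in the abstract can be roughly reproduced from the combined fit, assuming Gaussian errors and using the upper error bar (since the best-fit $w$ lies below $-1$):

```python
# Rough reproduction of the quoted tension between the combined fit
# w = -1.186 (+0.076/-0.065) and the cosmological constant value w = -1.
# Assumes Gaussian errors; uses the upper error bar since w < -1.
w_fit, sigma_up = -1.186, 0.076
tension = (-1.0 - w_fit) / sigma_up
print(f"tension ≈ {tension:.1f} sigma")  # ≈ 2.4 sigma
```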

So far, estimates of cosmological parameters from different data sets have broadly agreed with each other, hence the application of the word “concordance” to the standard cosmological model. However, supernova measurements do generally seem to push cosmological parameter estimates away from the comfort zone established by other types of observation. Could this apparent discordance be signalling that our ideas are wrong?

That’s the line pursued by a Scientific American article on this paper entitled “Leading Dark Energy Theory Incompatible with New Measurement”. This could be true, but I think it’s a bit early to be taking this line when there are still questions to be answered about the photometric accuracy of the Pan-STARRS survey. The headline I would have picked would be more like “New Measurement (Possibly) Incompatible With Other Measurements of Dark Energy”.

But that would have been boring…

## Updates for Cosmology: A Very Short Introduction?

Posted in Books, Talks and Reviews, The Universe and Stuff on October 21, 2013 by telescoper

Yet another very busy day, travelling in the morning and then in meetings all afternoon, so just time for another brief post. I thought I’d take the opportunity to do a little bit of crowdsourcing…

A few days ago I was contacted by Oxford University Press who are apparently considering the possibility of a second edition of my little book Cosmology: A Very Short Introduction, which is part of an extensive series of intensive books on all kinds of subjects.

I really enjoyed writing this book, despite the tough challenge of trying to cover the whole of cosmology in fewer than 35,000 words, and was very pleased with the way it turned out. It has sold over 25,000 copies in English and has been published in several other languages.

It is meant to be accessible to the interested layperson but the constraints imposed by the format mean it goes fairly quickly through some quite difficult concepts. Judging by the reviews, though, most people seem to think it gives a useful introduction to the subject, although you can’t please all of the people all of the time!

However, the book was published way back in 2001 and, well, one or two things have happened in the field of cosmology since then.  I have in fact had a number of emails from people asking whether there was going to be a new edition to include the latest developments, but the book is part of a very large series and it was basically up to the publisher to decide whether it wanted to update some, all or none of the series.

Now it seems the powers that be at OUP have decided to explore the possibility further and have asked me to make a pitch for a new edition. I have some ideas of things that would have to be revised – the section on Dark Energy definitely needs to be updated, and of course first WMAP and then Planck have refined our view of the cosmic microwave background pretty comprehensively.

Anyway, I thought it would be fun to ask people out there who have read it, or even those who haven’t, what they feel I should change for a new edition, if there is to be one. That might include new topics or revisions of things that could be improved. Your comments are therefore invited via the famous Comments Box. Please bear in mind that any new edition will also be constrained to be no more than 35,000 words.

Oh, and if you haven’t seen the First Edition at all, why not rush out and buy a copy before it’s too late? I understand you can snap up a copy for just £3 while stocks last. I can assure you all the royalties will go to an excellent cause. Me.

## Science, Religion and Henry Gee

Posted in Bad Statistics, Books, Talks and Reviews, Science Politics, The Universe and Stuff on September 23, 2013 by telescoper

Last week a piece appeared on the Grauniad website by Henry Gee, who is a Senior Editor at the journal Nature. I was prepared to get a bit snarky about the article when I saw the title, as it reminded me of an old rant about science being just a kind of religion by Simon Jenkins that got me quite annoyed a few years ago. Henry Gee’s article, however, is actually rather more coherent than that and not really deserving of some of the invective being flung at it.

For example, here’s an excerpt that I almost agree with:

One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the “significance” of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one’s grasp.

I’ve made the point on this blog many times that, although statistical reasoning lies at the heart of the scientific method, we don’t do anywhere near enough  to teach students how to use probability properly; nor do scientists do enough to explain the uncertainties in their results to decision makers and the general public.  I also agree with the concluding thought, that science isn’t about absolute truths. Unfortunately, Gee undermines his credibility by equating statistical reasoning with p-values which, in my opinion, are a frequentist aberration that contributes greatly to the public misunderstanding of science. Worse, he even gets the wrong statistics wrong…

But the main thing that bothers me about Gee’s article is that he blames scientists for promulgating the myth of “science-as-religion”. I don’t think that’s fair at all. Most scientists I know are perfectly well aware of the limitations of what they do. It’s really the media that want to portray everything in simple black-and-white terms. Some scientists play along, of course, as I comment upon below, but most of us are not priests but pragmatists.

Anyway, this episode gives me the excuse to point out  that I ended a book I wrote in 1998 with a discussion of the image of science as a kind of priesthood which it seems apt to repeat here. The book was about the famous eclipse expedition of 1919 that provided some degree of experimental confirmation of Einstein’s general theory of relativity and which I blogged about at some length last year, on its 90th anniversary.

I decided to post the last few paragraphs here to show that I do think there is a valuable point to be made out of the scientist-as-priest idea. It’s to do with the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing  that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions.  The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies  and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. At the same time that science reaches further and further beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that I was voicing doubts in 1998 about the interpretation of the early results from supernovae suggesting that the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain sceptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have – I just think we’re probably missing some important pieces of the puzzle. I don’t apologise for that. I think sceptical is what a scientist should be.

## Mingus – Oh Yeah!

Posted in Jazz, The Universe and Stuff on January 10, 2013 by telescoper

I noticed a news item this morning which explains that the Supernova Cosmology Project have found a supernova with a redshift of 1.71, which makes it the most distant one found so far  (about 10 billion light-years away).  That – and hopefully others at similar distances – should prove immensely useful  for working out how the expansion rate of the Universe has changed over its history and hence yield important clues about the nature of its contents, particularly the mysterious dark energy.

Of particular relevance to this blog is the name given to this supernova, Mingus, after the jazz musician and composer Charles Mingus. Both the discovery and the great choice of name are grounds for celebration, so here’s one of my favourite Mingus tracks – the delightfully carefree and exuberant Eat that Chicken, from the Album Oh Yeah. Enjoy!

## A Little Bit of Gravitational Lensing

Posted in The Universe and Stuff on December 30, 2012 by telescoper

I thought I’d take a short break from doing absolutely nothing to post a quick little item about gravitational lensing. It’s been in my mind to say something about this since I mentioned it in one of the lectures I gave just before Christmas, but I’ve been too busy (actually too disorganized) to do it until now. It’s all based on a paper posted to the arXiv in December, which was led by Jo Woodward (née Short), who did her PhD with me in Cardiff and is now in a postdoctoral research position in Durham (which is in the Midlands). The following pictures were taken from her paper.

This figure shows the geometry of a gravitational lens system: light from the source S is deflected by the gravitational potential of the lens L so that an image I appears at a position on the sky which is different from the actual position when viewed by the observer O:

There’s a critical radius (which depends on the mass and density profile of the lens) at which this can lead to the formation of multiple images of the source. Even if multiple images are not resolved, lensing results in an increase in the apparent brightness of the source.

A great deal of cosmological information can be gleaned statistically from lensing, even with limited knowledge of the properties of the source and lens populations and with incomplete information about e.g. the actual angular deflection produced by the lens or the lens mass. To illustrate this, consider the expression for the differential optical depth to lensing (related to the probability that a source at redshift $z_s$ is lensed by an object of mass $M$ at redshift $z_l$), which schematically takes the form

$\frac{{\rm d}^2\tau}{{\rm d}z_l\,{\rm d}M} = \frac{{\rm d}V}{{\rm d}z_l}\, (1+z_l)^{3}\, n(M, z_l)\, \sigma(M, z_l)$

The first two terms are cosmological, accounting for geometrical and expansion effects. Roughly speaking, the larger the volume out to a given redshift, the higher the probability that a given source will be lensed. The third term involves the mass function of lens systems. In the framework of the standard cosmological model this can be computed using Press-Schechter theory or one of its variations. According to current understanding, cosmological structures (i.e. galaxies and clusters of galaxies) form hierarchically, so this mass function changes with redshift, with fewer high-mass objects present at high redshift than at low redshift, as represented in this picture, in which masses are given in units of solar masses, with the colour-coding representing different redshifts:
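The redshift dependence of the mass function can be illustrated with a toy Press-Schechter calculation. This is only a sketch: the power-law $\sigma(M)$ and the simple $1/(1+z)$ growth factor below are illustrative assumptions, not fits to data.

```python
import math

# Toy Press-Schechter sketch illustrating how the abundance of massive
# haloes falls with redshift. The power-law sigma(M) and the growth
# factor D(z) = 1/(1+z) are illustrative assumptions, not fits to data.

DELTA_C = 1.686  # critical linear overdensity for collapse

def sigma(mass, z, m_star=1e13, sigma_star=1.0, alpha=0.3):
    """Toy rms fluctuation on mass scale `mass` (solar masses) at redshift z."""
    growth = 1.0 / (1.0 + z)  # Einstein-de Sitter-like growth factor
    return sigma_star * (mass / m_star) ** (-alpha) * growth

def ps_multiplicity(mass, z):
    """Press-Schechter multiplicity f(nu) = sqrt(2/pi) * nu * exp(-nu^2/2),
    proportional to the comoving abundance at fixed mass."""
    nu = DELTA_C / sigma(mass, z)
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

for z in [0.0, 1.0, 2.0]:
    print(f"z = {z}: relative abundance at 1e15 Msun = {ps_multiplicity(1e15, z):.3e}")
# the abundance of the most massive haloes drops steeply with increasing redshift
```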

The last term represents the lensing cross-section of an object with a given mass. This depends on the internal structure of the lens – an object in which the mass is highly concentrated produces lensing effects radically different from one in which it isn’t. Two simple models for the mass distribution are the singular isothermal sphere (SIS) and the Navarro-Frenk-White (NFW) profile. The latter is thought (by some) to represent the distribution of cold dark matter in haloes around galaxies and clusters; this is more diffuse than the distribution of the baryonic material, because dark matter cannot dissipate energy, which it would need to do in order to fall into the centre of the object. The real potential of a galaxy in its central regions could be closer to what the SIS profile would predict, however, because baryons outweigh dark matter there.
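For the SIS case the lensing strength is conveniently characterised by the Einstein radius, $\theta_E = 4\pi(\sigma_v/c)^2 D_{ls}/D_s$, and the cross-section for strong lensing scales as $\theta_E^2$. A minimal sketch, with an illustrative velocity dispersion and distance ratio:

```python
import math

# Einstein radius of a singular isothermal sphere (SIS) lens:
#   theta_E = 4*pi*(sigma_v/c)^2 * D_ls/D_s   (in radians).
# The velocity dispersion and distance ratio below are illustrative.

C_KMS = 299792.458  # speed of light in km/s

def sis_einstein_radius_arcsec(sigma_v_kms, d_ls_over_d_s):
    theta_rad = 4.0 * math.pi * (sigma_v_kms / C_KMS) ** 2 * d_ls_over_d_s
    return math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

# A galaxy-scale lens (sigma_v ~ 200 km/s) with D_ls/D_s = 0.5:
theta_e = sis_einstein_radius_arcsec(200.0, 0.5)
print(f"theta_E ≈ {theta_e:.2f} arcsec")
# the multiple-imaging cross-section scales as theta_E squared
```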

Now time for a bit of historical reminiscence. In 1997 I published a book with George Ellis in which we analysed the evidence available at the time relating to the density of matter in the Universe. It was a little bit controversial at the time, but it turns out we were correct in concluding that the density of matter was well below the level favoured by most theorists i.e. only about 20-30% of the critical density. However we did not find any compelling evidence at that time for a cosmological constant (or, if you prefer, dark energy). Indeed one of the strongest upper limits on the cosmological constant came from gravitational lensing measurements, or rather the dearth of them.

The reason for this negative conclusion was that, for a fixed value of the Hubble constant,  in the presence of a cosmological constant the volume out to a given redshift is much larger than if there is no cosmological constant. That means the above integral predicts a high probability for lensing. Surveys however failed to turn up large numbers of strongly-lensed objects, hence the inference that the universe could not be dominated by a cosmological constant. This is, of course, assuming that the other terms in the integral are well understood and that the reason significant numbers of lensed systems weren’t found wasn’t just they are tricky to identify…
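The volume effect at the heart of this argument is easy to check numerically: at fixed $H_0$, the comoving distance (and hence the volume) out to a given redshift is considerably larger in a flat Λ-dominated model than in an Einstein-de Sitter one. A rough sketch using simple midpoint integration (the value of $H_0$ is illustrative):

```python
import math

# Comoving distance out to redshift z in a flat FRW model, at fixed H0:
#   D_C = (c/H0) * Integral_0^z dz' / E(z'),
#   E(z) = sqrt(Omega_m*(1+z)^3 + Omega_Lambda).
# Simple midpoint-rule integration; H0 value is illustrative.

C_KMS, H0 = 299792.458, 70.0  # km/s and km/s/Mpc

def comoving_distance_mpc(z, omega_m, omega_l, steps=10000):
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz  # midpoint of each step
        integral += dz / math.sqrt(omega_m * (1 + zi) ** 3 + omega_l)
    return (C_KMS / H0) * integral

z = 1.0
d_eds = comoving_distance_mpc(z, 1.0, 0.0)   # Einstein-de Sitter (no Lambda)
d_lcdm = comoving_distance_mpc(z, 0.3, 0.7)  # Lambda-dominated flat model
print(f"D_C(z=1): EdS {d_eds:.0f} Mpc, LCDM {d_lcdm:.0f} Mpc")
print(f"volume ratio ≈ {(d_lcdm / d_eds) ** 3:.2f}")
# the Lambda model encloses a much larger volume, hence more expected lenses
```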

Meanwhile, huge advances were made in other aspects of observational cosmology that established a standard cosmological model in which the cosmological constant makes up almost 75% of the energy budget of the Universe.

Now, 15 years later, enter the Herschel Space Observatory, which turns out to be superb at identifying gravitational lenses. I posted about this here, in fact. Working in the far-infrared makes it impossible to resolve multiple images with Herschel – even with a 3.5m mirror in space, λ/D isn’t great for wavelengths of 500 microns! However, the vast majority of sources found during the Herschel ATLAS survey with large fluxes at these wavelengths can be identified as lenses simply because their brightness tells us they’ve probably been magnified by a lens. Candidates can then be followed up with other telescopes on the ground. A quick look during the Science Demonstration Phase of Herschel produced the first crop of firmly identified gravitational lens systems, published in Science by Negrello et al. When the full data set has been analysed there should be hundreds of such systems, which will revolutionize this field.
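The resolution point is easy to quantify from the numbers above, using the standard diffraction-limit estimate λ/D:

```python
import math

# Diffraction-limited angular resolution lambda/D for Herschel at 500 microns,
# compared with the ~arcsecond image splittings of galaxy-scale lenses.
wavelength_m = 500e-6  # 500 microns
mirror_d_m = 3.5       # Herschel primary mirror diameter in metres
theta_rad = wavelength_m / mirror_d_m
theta_arcsec = math.degrees(theta_rad) * 3600.0
print(f"lambda/D ≈ {theta_arcsec:.0f} arcsec")
# ~30 arcsec: far coarser than typical ~1 arcsec image separations, so
# Herschel selects lens candidates by brightness rather than by resolving images
```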

To see the potential (no pun intended) of this kind of measurement, take a look at these five systems from the SDP set:

These systems have measured (or estimated) source and lens redshifts. What is plotted is the conditional probability of a lens at some particular lens redshift, given the source redshift and the fact that strong lensing has occurred. Curves are given for SIS and NFW lens profiles and everything else is calculated according to the standard cosmological model. The green bars represent the measured lens redshifts.  It’s early days, so there are only five systems, but you can already see that they are pointing towards low lens redshifts, favouring NFW over SIS;  the yellow and light blue shading represents regions in which 68% of the likelihood lies.  These data don’t strongly prefer one model over the other, but with hundreds more, and extra information about at least some of the lens systems (such as detailed determinations of the lens mass from deflections etc) we should be able  to form more definite conclusions.

Unfortunately the proposal I submitted to STFC to develop a more detailed theoretical model and statistical analysis pipeline (Bayesian, of course) wasn’t funded. C’est la vie. That probably just means that someone smarter and quicker than me will do the necessary…

## Skepsis Revived

Posted in Politics, The Universe and Stuff on November 14, 2012 by telescoper

I appear to be in recycling mode this week, so I thought I’d carry on with a rehash of an old post about skepticism. The excuse for this was an item in one of the Guardian science blogs about the distinction between Skeptic and sceptic. I must say I always thought they were simply alternative spellings, the “k” being closer to the original Greek and the “c” being Latinised (via French). The Oxford English Dictionary merely states that “sceptic” is more widespread in the UK and Commonwealth whereas “skeptic” prevails in North America. Somehow, however, this distinction has morphed into one variant meaning a person who has a questioning attitude to, or is simply unconvinced by, what claims to be knowledge in a particular area, and another meaning a “denier”, the latter being an “anti-sceptic” who believes wholeheartedly, and often without evidence, in whatever is contrary to received wisdom. A scientist should, I think, be the former, but the latter represents a distinctly unscientific attitude.

Anyway, yesterday I blogged a little bit about dark energy, which, according to the standard model, accounts for about 75% of the energy budget of the Universe. It’s also something we don’t understand very well at all. To make a point, take a look at the following picture (credit: the High-z Supernova Search Team).

What is plotted is the redshift of each supernova (along the x-axis), which relates to the factor by which the universe has expanded since light set out from it. A redshift of 0.5 means the universe was compressed by a factor 1.5 in all dimensions at the time when that particular supernova went bang. The y-axis shows the really hard bit to get right. It’s the estimated distance (in terms of distance modulus) of the supernovae. In effect, this is a measure of how faint the sources are. The theoretical curves show the faintness expected of a standard source observed at a given redshift in various cosmological models. The bottom panel shows these plotted with a reference curve taken out so the trend is easier to see. Actually, this is quite an old plot and there are many more points now but this is the version that convinced most cosmologists when it came out about a decade ago, which is why I show it here.

The argument drawn from these data is that the high-redshift supernovae are fainter than one would expect in models without dark energy (represented by $\Omega_{\Lambda}$ in the diagram). If this is true then it means the luminosity distance of these sources is greater than it would be in a decelerating universe. Their observed properties can be accounted for, however, if the universe’s expansion rate has been accelerating since light set out from the supernovae. In the bog-standard cosmological models we all like to work with, acceleration requires that $\rho + 3p/c^2$ be negative. The “vacuum” equation of state $p=-\rho c^2$ provides a simple way of achieving this, but there are many other forms of energy that could do it too, and we don’t know which one is present or why…
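The faintness argument can be made quantitative via the distance modulus $\mu = 5\log_{10}(D_L/10\,{\rm pc})$. A rough sketch (flat models, illustrative $H_0$) showing that a standard source at $z=0.5$ appears a few tenths of a magnitude fainter in a Λ-dominated universe than in a matter-only one:

```python
import math

# Distance modulus mu(z) = 5*log10(D_L / 10 pc) in two flat models, at fixed H0.
# Luminosity distance in a flat model: D_L = (1+z) * D_C. H0 is illustrative.

C_KMS, H0 = 299792.458, 70.0  # km/s and km/s/Mpc

def distance_modulus(z, omega_m, omega_l, steps=10000):
    dz = z / steps
    # midpoint-rule integral of 1/E(z') from 0 to z
    integral = sum(dz / math.sqrt(omega_m * (1 + (i + 0.5) * dz) ** 3 + omega_l)
                   for i in range(steps))
    d_l_mpc = (1 + z) * (C_KMS / H0) * integral
    return 5.0 * math.log10(d_l_mpc * 1e6 / 10.0)  # Mpc -> pc, then mu

z = 0.5
mu_eds = distance_modulus(z, 1.0, 0.0)   # Einstein-de Sitter (no Lambda)
mu_lcdm = distance_modulus(z, 0.3, 0.7)  # standard Lambda-dominated model
print(f"z=0.5: mu(EdS) = {mu_eds:.2f}, mu(LCDM) = {mu_lcdm:.2f}")
print(f"Delta mu ≈ {mu_lcdm - mu_eds:.2f} mag fainter in the Lambda model")
```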

This plot contains the principal evidence that has led to most cosmologists accepting that the Universe is accelerating.  However, when I show it to first-year undergraduates (or even to members of the public at popular talks), they tend to stare in disbelief. The errors are huge, they say, and there are so  few data points. It just doesn’t look all that convincing. Moreover, there are other possible explanations. Maybe supernovae were different beasties back when the universe was young. Maybe something has absorbed their light making them look fainter rather than being further away. Maybe we’ve got the cosmological models wrong.

The reason I have shown this diagram is precisely because it isn’t superficially convincing. When they see it, students probably form the opinion that all cosmologists are gullible idiots. I’m actually pleased by that.  In fact, it’s the responsibility of scientists to be skeptical about new discoveries. However, it’s not good enough just to say “it’s not convincing so I think it’s rubbish”. What you have to do is test it, combine it with other evidence, seek alternative explanations and test those. In short you subject it to rigorous scrutiny and debate. It’s called the scientific method.

Some of my colleagues express doubts about me talking as I do about dark energy in first-year lectures when the students haven’t learned general relativity. But I stick to my guns. Too many people think science has to be taught as great stacks of received wisdom, of theories that are unquestionably “right”. Frontier sciences such as cosmology give us the chance to demonstrate the process by which we find out about the answers to big questions, not by believing everything we’re told but by questioning it.

My attitude to dark energy is that, given our limited understanding of the constituents of the universe and the laws of matter, it’s the best explanation we have of what’s going on. There is corroborating evidence of missing energy, from the cosmic microwave background and measurements of galaxy clustering, so it does have explanatory power. I’d say it was quite reasonable to believe in dark energy on the basis of what we know (or think we know) about the Universe.  In other words, as a good Bayesian, I’d say it was the most probable explanation. However, just because it’s the best explanation we have now doesn’t mean it’s a fact. It’s a credible hypothesis that deserves further work, but I wouldn’t bet much against it turning out to be wrong when we learn more.

I have to say that too many cosmologists seem to accept the reality of dark energy  with the unquestioning fervour of a religious zealot.  Influential gurus have turned the dark energy business into an industrial-sized bandwagon that sometimes makes it difficult, especially for younger scientists, to develop independent theories. On the other hand, it is clearly a question of fundamental importance to physics, so I’m not arguing that such projects should be axed. I just wish the culture of skepticism ran a little deeper.

Another context in which the word “skeptic” crops up frequently nowadays is in connection with climate change, although there it has come to mean “denier” rather than “doubter”. I’m not an expert on climate change, so I’m not going to pretend that I understand all the details. However, there is an interesting point to be made in comparing climate change with cosmology. To make the point, here’s another figure.

There’s obviously a lot of noise and it’s only the relatively few points at the far right that show a clear increase (just as in the first Figure, in fact). However, looking at the graph I’d say that, assuming the historical data points are accurate, it looks very convincing that the global mean temperature is rising with alarming rapidity. Modelling the Earth’s climate is very difficult and we have to leave it to the experts to assess the effects of human activity on this curve. There is a strong consensus from scientific experts, as monitored by the Intergovernmental Panel on Climate Change, that it is “very likely” that the increasing temperatures are due to increased atmospheric concentrations of greenhouse gases.

There is, of course, a bandwagon effect going on in the field of climatology, just as there is in cosmology. This tends to stifle debate, make things difficult for dissenting views to be heard and evaluated rationally,  and generally hinders the proper progress of science. It also leads to accusations of – and no doubt temptations leading to – fiddling of the data to fit the prevailing paradigm. In both fields, though, the general consensus has been established by an honest and rational evaluation of data and theory.

I would say that any scientist worthy of the name should be skeptical about the human-based interpretation of these data and that, as in cosmology (or any scientific discipline), alternative theories should be developed and additional measurements made. However, the situation in climatology is very different from that in cosmology in one important respect. The Universe will still be here in 100 years’ time. We might not.

The big issue relating to climate change is not just whether we understand what’s going on in the Earth’s atmosphere, it’s the risk to our civilisation of not doing anything about it. This is a great example where the probability of being right isn’t the sole factor in making a decision. Sure, there’s a chance that humans aren’t responsible for global warming. But if we carry on as we are for decades until we prove conclusively that we are, then it will be too late. The penalty for being wrong will be unbearable. On the other hand, if we tackle climate change by adopting greener technologies, burning less fossil fuel, wasting less energy and so on, these changes may cost us a bit of money in the short term but frankly we’ll be better off anyway whether we did it for the right reasons or not. Of course those whose personal livelihoods depend on the status quo are the ones who challenge the scientific consensus most vociferously. They would, wouldn’t they?

This is a good example of a decision that doesn’t hinge on a precise judgement of the probability of being right. In that respect, the issue of how likely it is that the scientists are correct on this one is almost irrelevant. Even if you’re a complete disbeliever in science you should know how to respond to this issue, following the logic of Blaise Pascal. He argued that there’s no rational argument for the existence or non-existence of God, but that the consequences of not believing if God does exist (eternal damnation) are much worse than those of behaving as if you believe in God when he doesn’t exist. For “God” read “climate change” and let Pascal’s wager be your guide….
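The asymmetry in the argument above can be made concrete with a toy expected-loss calculation in the spirit of Pascal’s wager. The payoff numbers here are made up purely for illustration; the point is that when one outcome is catastrophic, the best action barely depends on the exact probability that the science is right:

```python
# Toy decision-theory sketch with invented payoff numbers. "Act" means
# cutting emissions (a fixed, modest cost paid whether or not warming is
# real); "do nothing" risks a catastrophic loss if it is real.

def expected_loss(p_warming: float, loss_if_act: float,
                  loss_if_ignore_and_real: float) -> tuple[float, float]:
    """Return (loss if we act, expected loss if we do nothing)."""
    act = loss_if_act                            # paid either way
    ignore = p_warming * loss_if_ignore_and_real  # gamble on being right
    return act, ignore

for p in (0.9, 0.5, 0.1):
    act, ignore = expected_loss(p, loss_if_act=1.0,
                                loss_if_ignore_and_real=100.0)
    print(f"P(real) = {p:.1f}: act -> {act:.0f}, do nothing -> {ignore:.0f}")
```

Even at a probability of only 0.1, doing nothing carries ten times the expected loss of acting, with these (hypothetical) stakes.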

## A Dark Expletive

Posted in Poetry, The Universe and Stuff with tags , , , , , , , on November 13, 2012 by telescoper

A news item today about BOSS (yet another observational cosmology survey) gives me an excuse to recycle an idea from an old post.

The phrase expletive deleted was made popular at the time of Watergate after the release of the expurgated tapes made by Richard Nixon in the Oval Office when he was President of the United States of America. These showed that, as well as being a complete crook, he was practically unable to speak a single sentence without including a swear word.

Nowadays the word expletive is generally taken to mean an oath or exclamation, particularly if it is obscene, but that’s not quite what it really means. Derived from the Latin verb explere (“to fill out”), whose past participle is expletus, the word in the context of English grammar means “something added to a phrase or sentence that isn’t strictly needed for the grammatical sense”. An expletive is added either to fill a syntactical role or, in a poem, simply to make a line fit some metrical rule.

Examples of the former can be found in constructions like “It takes two to Tango” or “There is a lot of crime in Nottingham”; neither “it” nor “there” should really be needed, but English just seems to like to have something before the verb.

The second kind of use is illustrated wonderfully by Alexander Pope in his Essay on Criticism, which is a kind of guide to what to avoid in writing poetry. It’s a tour de force for its perceptiveness and humour. The following excerpt is pricelessly apt

These equal syllables alone require,
Tho’ oft the ear the open vowels tire;
While expletives their feeble aid do join;
And ten low words oft creep in one dull line

Here the expletive is “do”, and it is cleverly incorporated in the line talking about expletives, adding the syllable needed to fit the strict pentameter. Apparently, poets often used this construction before Pope attacked it, but it quickly fell from favour afterwards.

His other prosodic targets are the “open vowels” which means initial vowels that produce an ugly glottal sound, such as in “oft” (especially ugly when following “Tho”). The last line is brilliant too, showing how using only monosyllabic “low” words makes for a line that plods along tediously just like it says.

It’s amazing how much Pope managed to fit into this poem, given the restrictions imposed by the closed couplet structure he adopted. Each idea is compressed into a unit of twenty syllables, two lines of ten syllables with a rhyme at the end of each. This is such an impressive exercise in word-play that it reminds me a lot of the skill showed by the best cryptic crossword setters. Needless to say I’m no more successful at writing poetry than I am at setting crossword clues.

Anyway, what’s all this got to do with cosmology?

Well, I was reminded of it when I attended the 2012 Gerald Whitrow Lecture by Andrew Liddle last Friday at the Royal Astronomical Society, during which he talked, amongst other things, about Dark Energy.

The Dark Energy is an ingredient added to the standard model of cosmology to reconcile  observations of a flat Universe with a matter density that seems too low to account for it.
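The bookkeeping involved is simple: for a spatially flat universe the density parameters must sum to one, and matter alone falls well short. A minimal sketch, using round illustrative values rather than any specific measurement from the post:

```python
# Spatial flatness requires the density parameters to add up to 1.
# Matter surveys suggest only about 0.3 of that; the shortfall is the
# slot that dark energy is introduced to fill. Values are illustrative.

omega_matter = 0.3        # rough matter density parameter
omega_total_flat = 1.0    # required total for a flat universe
omega_lambda = omega_total_flat - omega_matter

print(f"Omega_Lambda = {omega_lambda:.1f}")
```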

Other than that it makes the cosmological metric work out satisfactorily (geddit?), we don’t understand what Dark Energy really is or why there is as much of it as there is. Indeed, many of us would rather it wasn’t there at all, because we think the resulting model is inelegant or even ugly, and are trying to think of other cosmological models that do not require its introduction.

In other words, Dark Energy is an expletive (though not one that’s been deleted).

Incidentally, one of the things Andrew said in his talk – and I agree with him 100% – is that in some sense we already know enough about dark energy from observations that we know we don’t understand it at all from a theoretical point of view. Bigger and better surveys, such as Euclid, producing more and more data will characterize its properties with greater statistical accuracy, but they won’t on their own solve the Dark Energy puzzle. For that we need better theoretical understanding.

My own view is that the problem of the vacuum energy is of the same character as the ultraviolet catastrophe that ushered in the era of quantum physics: a big problem that needs a big solution. What I mean by that is that it’s not something that can be resolved by tinkering with the existing theoretical framework. Something much more radical is needed.

## ESA Endorses Euclid

Posted in Science Politics, The Universe and Stuff with tags , , , , , , on June 20, 2012 by telescoper

I’m banned from my office for part of this morning because the PHYSX elves are doing mandatory safety testing of all my electrical whatnots. Hence, I’m staying at home, sitting in the garden, writing this little blog post about a bit of news I found on Twitter earlier.

Apparently the European Space Agency, or rather the Science Programme Committee thereof, has given the green light to a space mission called Euclid whose aim is to “map the geometry of the dark Universe”, i.e. mainly to study dark energy. Euclid is an M-class mission, pencilled in for launch in around 2019, and it is basically the result of a merger between two earlier proposals, the Dark Universe Explorer (DUNE, intended to measure effects of weak gravitational lensing) and the Spectroscopic All Sky Cosmic Explorer (SPACE, to measure wiggles in the galaxy power spectrum known as baryon acoustic oscillations); Euclid will do both of these.

Although I’m not directly involved, as a cosmologist I’m naturally very happy to see this mission finally given approval. To be honest, I am a bit sceptical about how much light Euclid will actually shed on the nature of dark energy, as I think the real issue is a theoretical one, not an observational one. It will probably end up simply measuring the cosmological constant to a few extra decimal places, which is hardly the issue when the value we try to calculate theoretically is over a hundred orders of magnitude too large! On the other hand, big projects like this do need their MacGuffin…
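The “hundred orders of magnitude” mismatch is the usual back-of-envelope comparison between the naive Planck-scale vacuum density and the observed dark energy density. A rough sketch of that estimate, using standard SI constants (my own illustration, not a calculation from the post):

```python
import math

# Compare the naive quantum-gravity (Planck) mass density, c^5/(hbar G^2),
# with the observed dark energy density. This is a back-of-envelope
# estimate; the observed value below is approximate.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

rho_planck = c**5 / (hbar * G**2)  # Planck density, kg/m^3 (~1e97)
rho_lambda = 6e-27                 # observed dark energy density, kg/m^3

orders = math.log10(rho_planck / rho_lambda)  # roughly 123 orders
print(f"mismatch: ~10^{orders:.0f}")
```

The discrepancy comes out at roughly 120-odd orders of magnitude, which is why a few extra decimal places on the measured value seem beside the point.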

The big concern being voiced by my colleagues, both inside and outside the cosmological community, is whether Euclid can actually be delivered within the agreed financial envelope (around 600 million euros). I’m not an expert in the technical issues relevant to this mission, but I’m told by a number of people who are that they are sceptical that the necessary instrumental challenges can be solved without going significantly over-budget. If the cost of Euclid does get inflated, that will have severe budgetary implications for the rest of the ESA science programme; I’m sure we all hope it doesn’t turn into another JWST.

I stand ready to be slapped down by more committed Euclideans for those remarks.

## the new BOSS in town

Posted in The Universe and Stuff with tags , , , on April 1, 2012 by telescoper

I wrote the following post yesterday, but I fell asleep before I could do anything with it. It's about the first set of results from the Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey-III project, which we announced to the science community and to the press yesterday. How this whole project was picked up by the press, in a way I hadn't anticipated, is a matter for another post.