## Getting the Measure of Space

Posted in The Universe and Stuff on October 8, 2014 by telescoper

Astronomy is one of the oldest scientific disciplines. Human beings have certainly been fascinated by goings-on in the night sky since prehistoric times, so perhaps astronomy is evidence that the urge to make sense of the Universe around us, and our own relationship to it, is an essential part of what it means to be human. Part of the motivation for astronomy in more recent times is practical. The regular motions of the stars across the celestial sphere help us to orient ourselves on the Earth’s surface, and to navigate the oceans. But there are deeper reasons too. Our brains seem to be made for problem-solving. We like to ask questions and to try to answer them, even if this leads us into difficult and confusing conceptual territory. And the deepest questions of all concern the Cosmos as a whole. How big is the Universe? What is it made of? How did it begin? How will it end? How can we hope to answer these questions? Do these questions even make sense?

The last century has witnessed a revolution in our understanding of the nature of the Universe of space and time. Huge improvements in the technology of astronomical instrumentation have played a fundamental role in these advances. Light travels extremely quickly (around 300,000 km per second) but we can now see objects so far away that the light we gather from them has taken billions of years to reach our telescopes and detectors. Using such observations we can tell that the Universe was very different in the past from what it looks like in the here and now. In particular, we know that the vast agglomerations of stars known as galaxies are rushing apart from one another; the Universe is expanding. Turning the clock back on this expansion leads us to the conclusion that everything was much denser in the past than it is now, and that there existed a time, before galaxies were born, when all the matter that existed was hotter than the Sun.

This picture of the origin and evolution of the Universe is what we call the Big Bang, and it is now so firmly established that its name has passed into popular usage. But how did we arrive at this description? Not by observation alone, for observations are nothing without a conceptual framework within which to interpret them. We got there through a complex interplay between data and theoretical conjecture that has taken us on a journey with many false starts and dead ends, and which has only slowly led us to a scheme that makes conceptual sense to our own minds as well as providing a satisfactory fit to the available measurements.

A particularly relevant aspect of this process is the establishment of the scale of astronomical distances. The basic problem here is that even the nearest stars are too remote for us to reach them physically. Indeed most stars can’t even be resolved by a telescope and are thus indistinguishable from points of light. The intensity of light received falls off as the inverse-square of the distance of the source, so if we knew the luminosity of each star we could work out its distance from us by measuring how much light we detect. Unfortunately, however, stars vary considerably in luminosity from one to another. So how can we tell the difference between a dim star that’s relatively nearby and a more luminous object much further away?
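The inverse-square argument can be sketched in a couple of lines of Python; the luminosity and flux values below are purely illustrative, not measurements:

```python
import math

def distance_from_flux(luminosity_w, flux_w_m2):
    """Distance (in metres) to a source of known luminosity from its
    measured flux, using the inverse-square law F = L / (4 pi d^2)."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# Example: a star with the Sun's luminosity (~3.8e26 W) observed with a
# flux of 1e-9 W/m^2 (an invented number, for illustration only).
L_sun = 3.828e26
d = distance_from_flux(L_sun, 1e-9)
print(f"{d:.3e} m")
```

The snag described in the text is, of course, that `luminosity_w` is exactly what we do not know for an arbitrary star.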

Over the centuries, astronomers have developed a battery of techniques to resolve this tricky conundrum. The first step involves the fact that terrestrial telescopes share the Earth’s motion around the Sun, so we’re not actually observing stars in the sky from the same vantage point all year round. Observed from opposite extremes of the Earth’s orbit (i.e. at an interval of six months) a star appears to change position in the sky, an effect known as parallax. If the size of the Earth’s orbit is known, which it is, an accurate measurement of the change of angular position of the star can yield its distance.
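The arithmetic of parallax is delightfully simple, because the parsec is defined as the distance at which the annual parallax is one arcsecond, so distance in parsecs is just the reciprocal of the parallax in arcseconds:

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from the annual parallax in arcseconds:
    d = 1 / p, by the definition of the parsec."""
    return 1.0 / parallax_arcsec

# Proxima Centauri, the nearest star, has a parallax of about 0.77 arcsec:
d_proxima = parallax_distance_pc(0.77)
print(f"{d_proxima:.2f} pc")  # about 1.3 pc
```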

The problem is that this effect is tiny, even for nearby stars, and it is immeasurably small for distant ones. Nevertheless, this method has successfully established the first “rung” on a cosmic distance ladder. Sufficiently many stellar distances have been measured this way to enable astronomers to understand and classify different types of star by their intrinsic properties. A particular type of variable star called a Cepheid variable emerged from these studies as a form of “standard candle”: such a star pulsates with a well-defined period that depends on its intrinsic brightness, so by measuring the time-variation of its apparent brightness we can tell how bright it actually is, and hence how far away it is. Since these stars are typically very luminous they can be observed at great distances, and the method can be calibrated accurately using measured parallaxes of nearer examples.
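The logic of the Cepheid method can be sketched as follows; the period-luminosity coefficients below are illustrative values close to published V-band calibrations, and the apparent magnitude is invented for the example:

```python
import math

def cepheid_abs_mag(period_days):
    """Approximate V-band period-luminosity (Leavitt) relation.
    The coefficients are illustrative, close to published calibrations,
    not an authoritative fit."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus
    m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A Cepheid pulsating with a 10-day period, observed at apparent magnitude 20:
M = cepheid_abs_mag(10.0)   # -4.05 with these coefficients
d = distance_pc(20.0, M)
print(f"{d / 1e6:.2f} Mpc")
```

The period gives the intrinsic brightness; comparing that with the apparent brightness gives the distance, exactly as described above.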

Cepheid variables are not the only distance indicators available to astronomers, but they have proved particularly important in establishing the scale of our Universe. For centuries astronomers have known that our own star, the Sun, is just one of billions arranged in an enormous disk-like structure, our Galaxy, called the Milky Way. But dotted around the sky are curious objects known as nebulae. These do not look at all like stars; they are extended, fuzzy objects similar in shape to the Milky Way. Could they be other galaxies, seen at enormous distances, or are they much smaller objects inside our own Galaxy?

Only a century ago nobody really knew the answer to that question. Eventually, after the construction of more powerful telescopes, astronomers spotted Cepheid variables in these nebulae and established that they were far too distant to be within the Milky Way but were in fact structures like our own Galaxy. This realization revealed the Cosmos to be much larger than most astronomers had previously imagined; conceptually speaking, the Universe had expanded. Soon, measurements of the spectra of light coming from extragalactic nebulae demonstrated that the Universe was actually expanding physically too. The evidence suggested that all distant galaxies were rushing away from our own with speed proportional to their distance from us, an effect now known as Hubble’s Law, after the astronomer Edwin Hubble who played a major role in its discovery.

A convincing theoretical interpretation of this astonishing result was only found with the adoption of Einstein’s General Theory of Relativity, a radically new conception of how gravity manifests itself as an effect of the behaviour of space-time. Whereas previously space and time were regarded as separate and absolute notions, providing an unchanging and impassive stage upon which material bodies interact, after Einstein space-time became a participant in the action, both influencing, and being influenced by, matter in motion. The space that seemed to separate galaxies from one another was now seen to bind them together.
Hubble’s Law emerges from this picture as a natural consequence of an expanding Universe, considered not as a collection of galaxies moving through static space but as one embedded in a space which is itself evolving dynamically. Light rays are bent and distorted as they travel through, and are influenced by, the changing landscape of space-time they encounter along their journey.
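In its simplest form Hubble’s Law is just $v = H_0 d$; a trivial sketch, assuming a round-number value of the Hubble constant:

```python
def recession_velocity_km_s(distance_mpc, h0=70.0):
    """Hubble's law: v = H0 * d, with H0 in km/s/Mpc.
    The default H0 = 70 is an assumed round number, close to
    modern estimates rather than a definitive measurement."""
    return h0 * distance_mpc

# A galaxy 100 Mpc away recedes at roughly:
v = recession_velocity_km_s(100.0)
print(f"{v:.0f} km/s")  # 7000 km/s for H0 = 70
```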

Einstein’s theory provides the theoretical foundations needed to construct a coherent framework for the interpretation of observations of the most distant astronomical objects, but only at the cost of demanding a radical reformulation of some fundamental concepts. The idea of space as an entity, with its own geometry and dynamics, is so central to general relativity that one can hardly avoid asking what space is in itself, i.e. what is its nature? Outside astronomy we tend to regard space as being the nothingness that lies in between the “things” (i.e. material bodies of one sort or another). Alternatively, when discussing a building (such as an art gallery) “a space” is usually described in terms of the boundaries enclosing it or by the way it is lit; it does not have attributes of its own other than those it derives from something else. But space is not simply an absence of things. If it has geometry and dynamics it has to be something rather than nothing, even if the nature of that something is extremely difficult to grasp.

Recent observations, for example, suggest that even the “empty space” of a pure vacuum possesses energy of its own: “dark energy”. This inference hinges on the Type Ia supernova, a type of stellar explosion so luminous it can (briefly) outshine an entire galaxy before gradually fading away. These cataclysmic events can be used as distance indicators because their peak brightness correlates with the rate at which they fade. Type Ia supernovae can be detected at far greater distances than Cepheids; at such huge distances, in fact, that the Universe was only about half its current size when their light set out. The problem is that the more distant supernovae look fainter, and therefore seem to be at greater distances, than expected if the expansion of the Universe were gradually slowing down, as it should do in the absence of dark energy.
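The effect can be illustrated numerically. The sketch below integrates the standard flat-universe luminosity-distance formula for a matter-only model and for one containing dark energy; the parameter values ($H_0 = 70$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$) are illustrative round numbers, not fitted values:

```python
import numpy as np

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def luminosity_distance_mpc(z, omega_m, omega_lambda):
    """Luminosity distance in a flat universe, by trapezoidal integration:
    D_L = (1 + z) * (c / H0) * integral_0^z dz' / E(z'),
    where E(z) = sqrt(omega_m * (1 + z)^3 + omega_lambda)."""
    zs = np.linspace(0.0, z, 1001)
    integrand = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_lambda)
    dz = zs[1] - zs[0]
    integral = float(np.sum((integrand[:-1] + integrand[1:]) * 0.5) * dz)
    return (1.0 + z) * (C_KM_S / H0) * integral

# A supernova at z = 0.5 in a matter-only universe vs one with dark energy:
d_matter = luminosity_distance_mpc(0.5, 1.0, 0.0)   # decelerating model
d_lcdm = luminosity_distance_mpc(0.5, 0.3, 0.7)     # accelerating model
print(f"matter only: {d_matter:.0f} Mpc; with dark energy: {d_lcdm:.0f} Mpc")
```

With these parameters the dark-energy model places a supernova at the same redshift several hundred megaparsecs further away than the matter-only model, which is precisely why the distant supernovae look “too faint”.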

At present there is no theory that can fully account for the existence of vacuum energy, but it is possible that it might eventually be explained by the behaviour of the quantum fields that arise in the theory of elementary particles. This could lead to a unified description of the inner space of subatomic matter and the outer space of general relativity, which has been the goal of many physicists for a considerable time. That would be a spectacular achievement but, as with everything else in science, it will only work out if we have the correct conceptual framework.

## A Dark Energy Mission

Posted in The Universe and Stuff on November 16, 2013 by telescoper

Here’s a challenge for cosmologists and aspiring science communicators out there. Most of you will know that the standard cosmological model involves a thing called Dark Energy, whose existence is inferred from observations that suggest that the expansion of the Universe appears to be accelerating.

That these observations require something a bit weird can be quickly seen by looking at the equation that governs the dynamics of the cosmic scale factor $R$ for a simple model involving matter in the form of a perfect fluid:

$\ddot{R}=-\frac{4\pi G}{3} \left( \rho + \frac{3p}{c^2}\right) R$

The terms in brackets relate to the density and pressure of the fluid, respectively. If the pressure is negligible (as is the case for “dust”), then the expansion is always decelerating, because the density of matter is always a positive quantity; we don’t know of anything that has a negative mass.

The only way to make the expansion of such a universe actually accelerate is to fill it with some sort of stuff that has

$\left( \rho + \frac{3p}{c^2} \right) < 0.$

In the lingo this means that the strong energy condition must be violated; this is what the hypothetical dark energy component is introduced to do. Note that this requires the dark energy to exert negative pressure, i.e. it has to be, in some sense, in tension.
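For a fluid with equation of state $p = w\rho c^2$ the bracketed term becomes $\rho(1+3w)$, so acceleration requires $w < -1/3$. A trivial check of that condition:

```python
def accelerates(w, rho=1.0, c=1.0):
    """True if a fluid with p = w * rho * c**2 makes the expansion
    accelerate, i.e. if rho + 3p/c**2 = rho * (1 + 3w) is negative.
    Units are chosen so that rho = c = 1 by default."""
    p = w * rho * c ** 2
    return rho + 3.0 * p / c ** 2 < 0.0

# Dust (w = 0) and radiation (w = 1/3) decelerate; w = -1/3 is the
# borderline case; a cosmological constant (w = -1) accelerates.
for w in (0.0, 1.0 / 3.0, -1.0 / 3.0, -1.0):
    print(f"w = {w:+.2f}: accelerates -> {accelerates(w)}")
```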

However, there’s something about this that seems very paradoxical. Pressure generates a force that pushes, tension corresponds to a force that pulls. In the cosmological setting, though, increasing positive pressure causes a greater deceleration while to make the universe accelerate requires tension. Why should a bigger pushing force cause the universe to slow down, while a pull causes it to speed up?

The lazy answer is to point at the equation and say “that’s what the mathematics says”, but that’s no use at all when you want to explain this to Joe Public.

Your mission, should you choose to accept it, is to explain in language appropriate to a non-expert, why a pull seems to cause a push…

Your attempts through the comments box please!

## Tension in Cosmology?

Posted in Astrohype, Bad Statistics, The Universe and Stuff on October 24, 2013 by telescoper

I noticed this abstract (of a paper by Rest et al.) on the arXiv the other day:

We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae (0.03<z<0.65) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We have investigated spatial and time variations in the photometry, and we find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the HST Calspec definition of the AB system. We discuss our efforts to minimize the systematic uncertainties in the photometry. A Hubble diagram is constructed with a subset of 112 SNe Ia (out of the 146) that pass our light curve quality cuts. The cosmological fit to 313 SNe Ia (112 PS1 SNe Ia + 201 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields w = -1.015^{+0.319}_{-0.201}(Stat)^{+0.164}_{-0.122}(Sys). When combined with BAO+CMB(Planck)+H0, the analysis yields \Omega_M = 0.277^{+0.010}_{-0.012} and w = -1.186^{+0.076}_{-0.065} including all identified systematics, as spelled out in the companion paper by Scolnic et al. (2013a). The value of w is inconsistent with the cosmological constant value of -1 at the 2.4 sigma level. This tension has been seen in other high-z SN surveys and endures after removing either the BAO or the H0 constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find w = -1.142^{+0.076}_{-0.087}, which diminishes the discord to <2 sigma. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample will be 3 times as large as this initial sample, which should provide more conclusive results.

The mysterious Pan-STARRS stands for the Panoramic Survey Telescope and Rapid Response System, a set of telescopes, cameras and related computing hardware that monitors the sky from its base in Hawaii. One of the many things this system can do is detect and measure distant supernovae, hence the particular application to cosmology described in the paper. The abstract mentions a preliminary measurement of the parameter w, which for those of you who are not experts in cosmology is usually called the “equation of state” parameter for the dark energy component involved in the standard model. What it describes is the relationship between the pressure $P$ and the energy density $\rho c^2$ of this mysterious stuff, via the relation $P=w\rho c^2$. The particularly interesting case is w=-1, which corresponds to a cosmological constant term; see here for a technical discussion. However, we don’t know how to explain this dark energy from first principles, so really w is a parameter that describes our ignorance of what is actually going on. In other words, the cosmological constant provides the simplest model of dark energy but even in that case we don’t know where it comes from, so it might well be something different; estimating w from surveys can therefore tell us whether we’re on the right track or not.
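One useful consequence of this parametrisation: for constant $w$ the energy density scales with the cosmic scale factor $a$ as $\rho \propto a^{-3(1+w)}$, so a cosmological constant ($w=-1$) keeps a constant density while matter and radiation dilute away. A quick sketch:

```python
def density_scaling(a, w, rho0=1.0):
    """Energy density of a fluid with constant equation-of-state parameter
    w, as a function of scale factor a (a = 1 today):
    rho(a) = rho0 * a**(-3 * (1 + w))."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# Halve the scale factor (go back to when the Universe was half its size):
print(density_scaling(0.5, 0.0))      # matter (w = 0): 8x denser
print(density_scaling(0.5, 1.0 / 3))  # radiation (w = 1/3): 16x denser
print(density_scaling(0.5, -1.0))     # cosmological constant (w = -1): unchanged
```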

The abstract explains that, within the errors, the Pan-STARRS data on their own are consistent with w=-1. More interestingly, though, when the supernova observations are combined with others the best-fit value of w shifts towards a value a bit less than -1 (although still with quite a large uncertainty). Incidentally, a value of w less than -1 is generally described as a “phantom” dark energy component. I’ve never really understood why…

So far, estimates of cosmological parameters from different data sets have broadly agreed with each other, hence the application of the word “concordance” to the standard cosmological model. However, supernova measurements do generally seem to push cosmological parameter estimates away from the comfort zone established by other types of observation. Could this apparent discordance be signalling that our ideas are wrong?

That’s the line pursued by a Scientific American article on this paper entitled “Leading Dark Energy Theory Incompatible with New Measurement”. This could be true, but I think it’s a bit early to be taking this line when there are still questions to be answered about the photometric accuracy of the Pan-STARRS survey. The headline I would have picked would be more like “New Measurement (Possibly) Incompatible With Other Measurements of Dark Energy”.

But that would have been boring…

## Updates for Cosmology: A Very Short Introduction?

Posted in Books, Talks and Reviews, The Universe and Stuff on October 21, 2013 by telescoper

Yet another very busy day, travelling in the morning and then in meetings all afternoon, so just time for another brief post. I thought I’d take the opportunity to do a little bit of crowdsourcing…

A few days ago I was contacted by Oxford University Press who are apparently considering the possibility of a second edition of my little book Cosmology: A Very Short Introduction, which is part of an extensive series of intensive books on all kinds of subjects.

I really enjoyed writing this book, despite the tough challenge of trying to cover the whole of cosmology in less than 35,000 words, and was very pleased with the way it turned out. It has sold over 25,000 copies in English and has been published in several other languages.

It is meant to be accessible to the interested layperson but the constraints imposed by the format mean it goes fairly quickly through some quite difficult concepts. Judging by the reviews, though, most people seem to think it gives a useful introduction to the subject, although you can’t please all of the people all of the time!

However, the book was published way back in 2001 and, well, one or two things have happened in the field of cosmology since then.  I have in fact had a number of emails from people asking whether there was going to be a new edition to include the latest developments, but the book is part of a very large series and it was basically up to the publisher to decide whether it wanted to update some, all or none of the series.

Now it seems the powers that be at OUP have decided to explore the possibility further and have asked me to make a pitch for a new edition. I have some ideas of things that would have to be revised: the section on Dark Energy definitely needs to be updated, and of course first WMAP and then Planck have refined our view of the cosmic microwave background pretty comprehensively.

Anyway, I thought it would be fun to ask people out there who have read it, or even those who haven’t, what they feel I should change for a new edition if there is to be one. That might include new topics or revisions of things that could be improved. Your comments are therefore invited via the famous Comments Box. Please bear in mind that any new edition will be also constrained to be no more than 35,000 words.

Oh, and if you haven’t seen the First Edition at all, why not rush out and buy a copy before it’s too late? I understand you can snap up a copy for just £3 while stocks last. I can assure you all the royalties will go to an excellent cause. Me.

## Science, Religion and Henry Gee

Posted in Bad Statistics, Books, Talks and Reviews, Science Politics, The Universe and Stuff on September 23, 2013 by telescoper

Last week a piece appeared on the Grauniad website by Henry Gee who is a Senior Editor at the magazine Nature. I was prepared to get a bit snarky about the article when I saw the title, as it reminded me of an old rant about science being just a kind of religion by Simon Jenkins that got me quite annoyed a few years ago. Henry Gee’s article, however, is actually rather more coherent than that and not really deserving of some of the invective being flung at it.

For example, here’s an excerpt that I almost agree with:

One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the “significance” of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one’s grasp.

I’ve made the point on this blog many times that, although statistical reasoning lies at the heart of the scientific method, we don’t do anywhere near enough to teach students how to use probability properly; nor do scientists do enough to explain the uncertainties in their results to decision makers and the general public. I also agree with the concluding thought, that science isn’t about absolute truths. Unfortunately, Gee undermines his credibility by equating statistical reasoning with p-values which, in my opinion, are a frequentist aberration that contributes greatly to the public misunderstanding of science. Worse, he even gets the wrong statistics wrong…

But the main thing that bothers me about Gee’s article is that he blames scientists for promulgating the myth of “science-as-religion”. I don’t think that’s fair at all. Most scientists I know are perfectly well aware of the limitations of what they do. It’s really the media that want to portray everything in simple black and white terms. Some scientists play along, of course, as I comment upon below, but most of us are not priests but pragmatists.

Anyway, this episode gives me the excuse to point out that I ended a book I wrote in 1998 with a discussion of the image of science as a kind of priesthood which it seems apt to repeat here. The book was about the famous eclipse expedition of 1919 that provided some degree of experimental confirmation of Einstein’s general theory of relativity and which I blogged about at some length last year, on its 90th anniversary.

I decided to post the last few paragraphs here to show that I do think there is a valuable point to be made out of the scientist-as-priest idea. It’s to do with the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions. The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. And the further science reaches beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that I was voicing doubts about the interpretation of the early results from supernovae in 1998 that suggested the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain sceptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have; I just think we’re probably missing some important pieces of the puzzle. I don’t apologise for that. I think sceptical is what a scientist should be.

## Mingus – Oh Yeah!

Posted in Jazz, The Universe and Stuff on January 10, 2013 by telescoper

I noticed a news item this morning which explains that the Supernova Cosmology Project have found a supernova with a redshift of 1.71, which makes it the most distant one found so far (about 10 billion light-years away). That – and hopefully others at similar distances – should prove immensely useful for working out how the expansion rate of the Universe has changed over its history and hence yield important clues about the nature of its contents, particularly the mysterious dark energy.

Of particular relevance to this blog is the name given to this supernova, Mingus, after the jazz musician and composer Charles Mingus. Both the discovery and the great choice of name are grounds for celebration, so here’s one of my favourite Mingus tracks – the delightfully carefree and exuberant Eat that Chicken, from the Album Oh Yeah. Enjoy!

## A Little Bit of Gravitational Lensing

Posted in The Universe and Stuff on December 30, 2012 by telescoper

I thought I’d take a short break from doing absolutely nothing to post a quick little item about gravitational lensing. It’s been in my mind to say something about this since I mentioned it in one of the lectures I gave just before Christmas, but I’ve been too busy (actually too disorganized) to do it until now. It’s all based on a paper posted to the arXiv in December which was led by Jo Woodward (née Short), who did her PhD with me in Cardiff and is now in a postdoctoral research position in Durham (which is in the Midlands). The following pictures were taken from her paper.

This figure shows the geometry of a gravitational lens system: light from the source S is deflected by the gravitational potential of the lens L so that an image I appears at a position on the sky which is different from the actual position when viewed by the observer O:

There’s a critical radius (which depends on the mass and density profile of the lens) at which this can lead to the formation of multiple images of the source. Even if multiple images are not resolved, lensing results in an increase in the apparent brightness of the source.

A great deal of cosmological information can be gleaned statistically from lensing, even with limited knowledge of the properties of the source and lens populations and with incomplete information about e.g. the actual angular deflection produced by the lens or the lens mass. To illustrate this, just consider the expression for the differential optical depth to lensing (related to the probability that a source at redshift $z_s$ is lensed by an object at redshift $z_l$), which takes the schematic form

$\frac{d\tau}{dz_l} \sim \frac{c\, dt}{dz_l}\, (1+z_l)^{3} \int n(M, z_l)\, \sigma(M, z_l)\, dM$

The first two terms are cosmological, accounting for geometrical and expansion effects. Roughly speaking, the larger the volume out to a given redshift, the higher the probability that a given source will be lensed. The third term involves the mass function of lens systems. In the framework of the standard cosmological model this can be computed using Press-Schechter theory or one of its variations. According to current understanding, cosmological structures (i.e. galaxies and clusters of galaxies) form hierarchically, so this mass function changes with redshift, with fewer high-mass objects present at high redshift than at low redshift, as represented in this picture, in which masses are given in units of solar masses and the colour-coding represents different redshifts:
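To illustrate the qualitative behaviour of the mass-function term, here's a toy Press-Schechter-style calculation of the fraction of matter collapsed into haloes above a given mass. The power-law $\sigma(M)$ and the Einstein-de Sitter growth factor $D(z) = 1/(1+z)$ are crude assumptions chosen purely for illustration, not a fit to anything:

```python
import math

DELTA_C = 1.686  # linear-theory collapse threshold

def sigma(m):
    # Toy power-law rms fluctuation amplitude; m in units of 1e12
    # solar masses, normalisation assumed for illustration only.
    return 2.0 * m**(-0.3)

def collapsed_fraction(m, z):
    # Press-Schechter fraction of mass in haloes above mass m,
    # with Einstein-de Sitter linear growth D(z) = 1/(1+z).
    return math.erfc(DELTA_C / (math.sqrt(2) * sigma(m) / (1 + z)))

for z in (0, 2, 5):
    # cluster-scale objects (1e14 solar masses) get rapidly rarer with z
    print(z, collapsed_fraction(100, z))
```

The steep decline with redshift is the hierarchical-formation behaviour described above: massive lenses are far scarcer at early times.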

The last term represents the lensing cross-section of an object with a given mass. This depends on the internal structure of the lens: an object in which the mass is highly concentrated produces lensing effects radically different from one in which it isn't. Two simple models for the mass distribution are the singular isothermal sphere (SIS) and the Navarro-Frenk-White (NFW) profile. The latter is thought (by some) to represent the distribution of cold dark matter in haloes around galaxies and clusters, which is more diffuse than that of the baryonic material because dark matter can't dissipate energy, which it would need to do in order to fall into the centre of the object. The real potential of a galaxy in its central regions could be closer to what the SIS profile would predict, however, because baryons outweigh dark matter there.
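The difference between the two profiles is easiest to see in their logarithmic density slopes: the SIS falls as $r^{-2}$ everywhere, while the NFW profile is shallower ($\sim r^{-1}$) in the centre and steeper ($\sim r^{-3}$) in the outskirts. A quick numerical check (units and normalisations here are arbitrary):

```python
import math

def rho_sis(r, sigma_v=200e3, G=6.674e-11):
    # Singular isothermal sphere: rho = sigma_v^2 / (2 pi G r^2)
    return sigma_v**2 / (2 * math.pi * G * r**2)

def rho_nfw(r, rho_s=1.0, r_s=1.0):
    # Navarro-Frenk-White profile, in units of rho_s and scale radius r_s
    x = r / r_s
    return rho_s / (x * (1 + x)**2)

def log_slope(rho, r, eps=1e-4):
    # Numerical logarithmic slope d ln(rho) / d ln(r)
    return (math.log(rho(r * (1 + eps))) - math.log(rho(r * (1 - eps)))) / (2 * eps)

print("SIS slope at any r:    ", log_slope(rho_sis, 1.0))   # always -2
print("NFW slope at r=0.01 r_s:", log_slope(rho_nfw, 0.01)) # near -1
print("NFW slope at r=100 r_s: ", log_slope(rho_nfw, 100.0))# near -3
```

The shallow NFW core is exactly why its lensing cross-section behaves so differently from the SIS one, as the redshift plots below illustrate.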

Now time for a bit of historical reminiscence. In 1997 I published a book with George Ellis in which we analysed the evidence then available relating to the density of matter in the Universe. It was a little bit controversial at the time, but it turns out we were correct in concluding that the density of matter was well below the level favoured by most theorists, i.e. only about 20-30% of the critical density. However, we did not find any compelling evidence at that time for a cosmological constant (or, if you prefer, dark energy). Indeed one of the strongest upper limits on the cosmological constant came from gravitational lensing measurements, or rather the dearth of them.

The reason for this negative conclusion was that, for a fixed value of the Hubble constant, in the presence of a cosmological constant the volume out to a given redshift is much larger than if there is no cosmological constant. That means the above integral predicts a high probability for lensing. Surveys however failed to turn up large numbers of strongly-lensed objects, hence the inference that the universe could not be dominated by a cosmological constant. This is, of course, assuming that the other terms in the integral are well understood and that the reason significant numbers of lensed systems weren't found wasn't just that they are tricky to identify…
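The volume effect is easy to quantify. Comparing the comoving distance to $z=2$ in an Einstein-de Sitter universe with that in a flat model with $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$ (both with the same Hubble constant) gives a volume larger by roughly a factor of three, hence the naive expectation of many more lenses:

```python
def E(z, om, ol):
    # Dimensionless Hubble rate for a flat universe: H(z)/H0
    return (om * (1 + z)**3 + ol) ** 0.5

def comoving_distance(z, om, ol, n=10000):
    # Dimensionless comoving distance D_C * H0 / c, by trapezoidal
    # integration of dz / E(z).
    h = z / n
    s = 0.5 * (1.0 / E(0, om, ol) + 1.0 / E(z, om, ol))
    for i in range(1, n):
        s += 1.0 / E(i * h, om, ol)
    return s * h

d_eds = comoving_distance(2.0, 1.0, 0.0)   # Einstein-de Sitter
d_lcdm = comoving_distance(2.0, 0.3, 0.7)  # Lambda-dominated
print(f"volume ratio (Lambda/EdS) out to z=2: {(d_lcdm / d_eds)**3:.1f}")
```

So at fixed Hubble constant a Λ-dominated universe offers roughly three times the volume (and hence, other things being equal, many more chances of lensing) out to $z=2$.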

Meanwhile, huge advances were made in other aspects of observational cosmology that established a standard cosmological model in which the cosmological constant makes up almost 75% of the energy budget of the Universe.

Now, 15 years later, enter the Herschel Space Observatory, which turns out to be superb at identifying gravitational lenses. I posted about this here, in fact. Working in the far-infrared makes it impossible to resolve multiple images with Herschel: even with a 3.5m mirror in space, λ/D isn't great at a wavelength of 500 microns! However, the vast majority of sources found during the Herschel ATLAS survey with large fluxes at this wavelength can be identified as lenses simply because their brightness tells us they've probably been magnified by a lens. Candidates can then be followed up with other telescopes on the ground. A quick look during the Science Demonstration Phase of Herschel produced the first crop of firmly identified gravitational lens systems, published in Science by Negrello et al. When the full data set has been analysed there should be hundreds of such systems, which will revolutionize this field.
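For the record, the diffraction limit quoted above is easy to check: at 500 microns even a 3.5m mirror resolves only tens of arcseconds, far coarser than the arcsecond-scale image separations of typical galaxy lenses.

```python
import math

wavelength = 500e-6   # 500 microns, in metres
mirror = 3.5          # Herschel primary mirror diameter, metres

# Diffraction-limited resolution for a circular aperture: theta ≈ 1.22 λ/D
theta = 1.22 * wavelength / mirror
print(f"resolution ≈ {math.degrees(theta) * 3600:.0f} arcsec")  # ~36 arcsec
```

Hence the selection trick: you can't see the multiple images, but an implausibly bright source is a good bet to be magnified.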

To see the potential (no pun intended) of this kind of measurement, take a look at these five systems from the SDP set:

These systems have measured (or estimated) source and lens redshifts. What is plotted is the conditional probability of a lens at some particular lens redshift, given the source redshift and the fact that strong lensing has occurred. Curves are given for SIS and NFW lens profiles and everything else is calculated according to the standard cosmological model. The green bars represent the measured lens redshifts. It's early days, so there are only five systems, but you can already see that they are pointing towards low lens redshifts, favouring NFW over SIS; the yellow and light blue shading represents regions in which 68% of the likelihood lies. These data don't strongly prefer one model over the other, but with hundreds more, and extra information about at least some of the lens systems (such as detailed determinations of the lens mass from deflections etc) we should be able to form more definite conclusions.
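As a very stripped-down illustration of how such conditional redshift distributions arise, here is a toy calculation of $p(z_l | z_s)$ for SIS-like lenses with a non-evolving lens population, so that the shape is set entirely by the distance factors. This is a drastic simplification of the full calculation in the paper, assumed purely for illustration:

```python
def E(z, om=0.3, ol=0.7):
    # Dimensionless Hubble rate H(z)/H0 for a flat universe
    return (om * (1 + z)**3 + ol) ** 0.5

def d_c(z, n=2000):
    # Dimensionless comoving distance, trapezoid rule
    h = z / n
    s = 0.5 * (1.0 + 1.0 / E(z))
    for i in range(1, n):
        s += 1.0 / E(i * h)
    return s * h

z_s = 2.0                                 # assumed source redshift
d_s = d_c(z_s)
zs = [0.05 * i for i in range(1, 40)]     # lens redshift grid 0.05 .. 1.95
weights = []
for z_l in zs:
    d_l = d_c(z_l)
    d_ls = d_s - d_l                      # flat universe, comoving distances
    # Toy weighting: path-length factor times SIS-like cross-section,
    # sigma ∝ (D_l * D_ls / D_s)^2
    weights.append((1.0 / E(z_l)) * (d_l * d_ls / d_s)**2)
peak = zs[weights.index(max(weights))]
print(f"most probable lens redshift ≈ {peak:.2f}")
```

Even this crude version peaks at intermediate redshifts well below the source, which is why measured lens redshifts carry discriminating power between profiles and cosmologies.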

Unfortunately the proposal I submitted to STFC to develop a more detailed theoretical model and statistical analysis pipeline (Bayesian, of course) wasn’t funded. C’est la vie. That probably just means that someone smarter and quicker than me will do the necessary…