## A Little Bit of Gravitational Lensing

I thought I’d take a short break from doing absolutely nothing to post a quick little item about gravitational lensing. It’s been in my mind to say something about this since I mentioned it in one of the lectures I gave just before Christmas, but I’ve been too busy (actually too disorganized) to do it until now. It’s all based on a paper posted to the arXiv in December which was led by Jo Woodward (née Short), who did her PhD with me in Cardiff and is now in a postdoctoral research position in Durham (which is in the Midlands). The following pictures were taken from her paper.

This figure shows the geometry of a gravitational lens system: light from the source S is deflected by the gravitational potential of the lens L so that, as seen by the observer O, an image I appears at a position on the sky different from the source’s actual position:

There’s a critical radius (which depends on the mass and density profile of the lens) at which this can lead to the formation of multiple images of the source. Even if multiple images are not resolved, lensing results in an increase in the apparent brightness of the source.

A great deal of cosmological information can be gleaned statistically from lensing, even with limited knowledge of the properties of the source and lens populations and incomplete information about e.g. the actual angular deflection produced by the lens or the lens mass. To illustrate this, just consider the expression for the differential optical depth to lensing (related to the probability that a source at redshift $z_s$ is lensed by an object at redshift $z_l$):
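Schematically (the notation here is a standard form of this expression; the paper’s exact conventions may differ), the optical depth looks like

$$\frac{d\tau}{dz_l} \;=\; (1+z_l)^3\,\left|\frac{c\,dt}{dz_l}\right|\,\int \frac{dn}{dM}(M, z_l)\,\sigma(M, z_l, z_s)\,dM$$

where $dn/dM$ is the comoving mass function of lenses and $\sigma$ is the lensing cross-section of a lens of mass $M$.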

The first two terms are cosmological, accounting for geometrical and expansion effects. Roughly speaking, the larger the volume out to a given redshift, the higher the probability that a given source will be lensed. The third term involves the mass function of lens systems. In the framework of the standard cosmological model this can be computed using Press-Schechter theory or one of its variations. According to current understanding, cosmological structures (i.e. galaxies and clusters of galaxies) form hierarchically, so this mass function changes with redshift, with fewer high-mass objects present at high redshift than at low redshift, as represented in this picture, in which masses are given in units of solar masses and the colour-coding represents different redshifts:
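(For reference, the mass function in its original Press-Schechter form is

$$\frac{dn}{dM} \;=\; \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^2}\,\frac{\delta_c}{\sigma(M,z)}\left|\frac{d\ln\sigma}{d\ln M}\right|\exp\!\left(-\frac{\delta_c^2}{2\sigma^2(M,z)}\right)$$

with $\delta_c \approx 1.686$ the critical linear overdensity for collapse and $\sigma(M,z)$ the variance of the linearly evolved density field smoothed on mass scale $M$; the redshift dependence enters through the growth of $\sigma(M,z)$.)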

The last term represents the lensing cross-section of an object with a given mass. This depends on the internal structure of the lens – an object in which the mass is highly concentrated produces lensing effects radically different from one in which it is not. Two simple models for the mass distribution are the singular isothermal sphere (SIS) and the Navarro-Frenk-White (NFW) profile. The latter is thought (by some) to represent the distribution of cold dark matter in haloes around galaxies and clusters; this is more diffuse than the distribution of the baryonic material, because dark matter cannot dissipate energy, which it would need to do in order to fall into the centre of the object. The real potential of a galaxy in its central regions could be closer to what the SIS profile would predict, however, because baryons outweigh dark matter there.
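As a concrete (if simplified) illustration, the two density profiles can be written down in a few lines; the parameter values are placeholders for illustration, not fits to any real lens:

```python
import math

G = 6.674e-11  # Newton's gravitational constant, SI units


def rho_sis(r, sigma_v):
    """Singular isothermal sphere: rho(r) = sigma_v^2 / (2 pi G r^2),
    where sigma_v is the velocity dispersion. Falls as r^-2 at all radii."""
    return sigma_v**2 / (2.0 * math.pi * G * r**2)


def rho_nfw(r, rho_s, r_s):
    """Navarro-Frenk-White profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2].
    Shallower (~r^-1) than SIS in the centre, steeper (~r^-3) far outside r_s."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```

The point of the comparison is that the SIS is much more centrally concentrated than the NFW halo, which is why the two predict quite different lensing cross-sections for the same mass.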

Now time for a bit of historical reminiscence. In 1997 I published a book with George Ellis in which we analysed the evidence available at the time relating to the density of matter in the Universe. It was a little bit controversial at the time, but it turns out we were correct in concluding that the density of matter was well below the level favoured by most theorists i.e. only about 20-30% of the critical density. However we did not find any compelling evidence at that time for a cosmological constant (or, if you prefer, dark energy). Indeed one of the strongest upper limits on the cosmological constant came from gravitational lensing measurements, or rather the dearth of them.

The reason for this negative conclusion was that, for a fixed value of the Hubble constant, in the presence of a cosmological constant the volume out to a given redshift is much larger than if there is no cosmological constant. That means the above integral predicts a high probability for lensing. Surveys, however, failed to turn up large numbers of strongly-lensed objects, hence the inference that the universe could not be dominated by a cosmological constant. This is, of course, assuming that the other terms in the integral are well understood and that the reason significant numbers of lensed systems weren’t found wasn’t just that they are tricky to identify…
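The size of this volume effect is easy to check numerically. A minimal sketch, comparing the comoving distance to z = 2 in a flat Λ-dominated model against an Einstein-de Sitter model (the parameter values are illustrative, not taken from the paper):

```python
import math


def comoving_distance(z_max, omega_m, omega_l, h=0.7, n=10000):
    """Comoving distance in Mpc for a flat FRW model:
    D_C = (c/H0) * integral_0^z dz' / E(z'),  E(z) = sqrt(Om(1+z)^3 + OL)."""
    c_over_h0 = 299792.458 / (100.0 * h)  # Hubble distance in Mpc
    dz = z_max / n
    total = 0.0
    for i in range(n):
        zi = (i + 0.5) * dz  # midpoint rule
        total += dz / math.sqrt(omega_m * (1.0 + zi) ** 3 + omega_l)
    return c_over_h0 * total


d_lambda = comoving_distance(2.0, 0.3, 0.7)  # Lambda-dominated model
d_eds = comoving_distance(2.0, 1.0, 0.0)     # Einstein-de Sitter model
print(d_lambda / d_eds)  # distance ratio; comoving volume goes as the cube
```

The distance ratio comes out at about 1.4, so the comoving volume out to z = 2 is roughly three times larger in the Λ-dominated model – hence the expectation of many more lenses.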

Meanwhile, huge advances were made in other aspects of observational cosmology that established a standard cosmological model in which the cosmological constant makes up almost 75% of the energy budget of the Universe.

Now, 15 years on, enter the Herschel Space Observatory, which turns out to be superb at identifying gravitational lenses. I posted about this here, in fact. Working in the far-infrared makes it impossible to resolve multiple images with Herschel – even with a 3.5m mirror in space, λ/D isn’t great for wavelengths of 500 microns! However, the vast majority of sources found during the Herschel ATLAS survey with large fluxes at this wavelength can be identified as lenses simply because their brightness tells us they’ve probably been magnified by a lens. Candidates can then be followed up with other telescopes on the ground. A quick look during the Science Demonstration Phase of Herschel produced the first crop of firmly identified gravitational lens systems, published in Science by Negrello et al. When the full data set has been analysed there should be hundreds of such systems, which will revolutionize this field.
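To put numbers on the λ/D point, here is a back-of-the-envelope diffraction limit for a 3.5m aperture at 500 microns (the 1.22 factor is the usual Rayleigh-criterion convention):

```python
import math

wavelength = 500e-6  # 500 microns, in metres
diameter = 3.5       # Herschel primary mirror diameter, in metres

# Rayleigh criterion for the diffraction-limited beam, in radians
theta_rad = 1.22 * wavelength / diameter

# convert to arcseconds
theta_arcsec = math.degrees(theta_rad) * 3600.0
print(f"Diffraction limit at 500 microns: {theta_arcsec:.0f} arcsec")
```

That’s about 36 arcseconds – vastly coarser than the arcsecond-scale image separations typical of galaxy-scale lenses, so there is indeed no hope of resolving the multiple images directly.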

To see the potential (no pun intended) of this kind of measurement, take a look at these five systems from the SDP set:

These systems have measured (or estimated) source and lens redshifts. What is plotted is the conditional probability of a lens at some particular lens redshift, given the source redshift and the fact that strong lensing has occurred. Curves are given for SIS and NFW lens profiles and everything else is calculated according to the standard cosmological model. The green bars represent the measured lens redshifts.  It’s early days, so there are only five systems, but you can already see that they are pointing towards low lens redshifts, favouring NFW over SIS;  the yellow and light blue shading represents regions in which 68% of the likelihood lies.  These data don’t strongly prefer one model over the other, but with hundreds more, and extra information about at least some of the lens systems (such as detailed determinations of the lens mass from deflections etc) we should be able  to form more definite conclusions.
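The construction behind those curves is just Bayesian normalisation: given the source redshift and the fact that lensing has occurred, the differential optical depth restricted to 0 < z_l < z_s and normalised to unit area is the conditional distribution for the lens redshift. A toy sketch of that step (the functional form of dτ/dz_l below is a made-up placeholder, not either of the real SIS/NFW calculations):

```python
import numpy as np


def lens_redshift_pdf(dtau_dzl, z_s, n=2000):
    """Turn a differential optical depth into p(z_l | z_s, lensing)
    by normalising it over 0 < z_l < z_s (trapezoidal rule)."""
    z = np.linspace(1e-4, z_s, n)
    w = dtau_dzl(z)
    norm = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(z))
    return z, w / norm


# placeholder optical depth for illustration: rises from z=0,
# dies away towards the source redshift
z_s = 2.0
z, p = lens_redshift_pdf(lambda z: z * (z_s - z) ** 2, z_s)
```

Swapping in the real dτ/dz_l for an SIS or NFW lens population is what produces the different predicted curves that the measured lens redshifts can then discriminate between.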

Unfortunately the proposal I submitted to STFC to develop a more detailed theoretical model and statistical analysis pipeline (Bayesian, of course) wasn’t funded. C’est la vie. That probably just means that someone smarter and quicker than me will do the necessary…

### 9 Responses to “A Little Bit of Gravitational Lensing”

1. …a postdoctoral research position in Durham (which is in the Midlands) …

I realise that a Geordie might view it differently but I think you will find that most people are of the opinion that Durham is in the North(east) of England.

2. Steve Warren Says:

You seem to assume circular symmetry? Perhaps allowing for a distribution of ellipticities was in your proposal.

• telescoper Says:

Yes, this all assumes spherical lenses in fact which isn’t right. It’s not too hard to include ellipsoidal distributions.

3. surely the predicted number of lensed sources above some flux limit (the relevant equation, as the optical depth isn’t observable) also depends sensitively on the (unknown?) luminosity evolution of the source population? i always thought that this was the reason that these sorts of tests failed when applied to QSOs (you need to know how many luminous QSOs there are out there in the absence of lensing – but you only have the post-lensed population to work with).

and you should make sure that the ladies and gentlemen of your audience don’t get the impression you got nothing for xmas from STFC… didn’t they fund a CMB PDRA and fEC for you?

happy new year from the north east of england! – ian

• telescoper Says:

Ian

The conditional distribution I plotted doesn’t depend on the source population – just on the fact that there is a source that has been lensed. The x-axis extends from zero to the known source redshift. But for this quick look we try to finesse that problem. I agree though that understanding the source population better could allow one to do a lot more…

And yes my other project did get funded, which in the current climate I’m mighty relieved about! I’ve already transferred the funding to Sussex, actually, with the generous agreement of Cardiff.

Have a happy new year in the North Midlands,

Peter

but “lensed” isn’t a binary state of being – it’s a continuum… indeed everything at a reasonable distance is “lensed”.

so (as always) it comes down to the sample definition – what precisely is required for a source to be called “lensed” and what other observational requirements does it need to fulfil to make it into the sample?

this is why these bayesian black boxes are dangerous – garbage in = garbage out.

ian

• telescoper Says:

Indeed. The definition we took here (which I apologize for not making clear in the post) is that multiple images are formed. There’s the question of image separation, resolution etc.

It’s possible to look at the overall statistical effect of lensing on the source population but that involves doing more than the simple optical depth calculation outlined above, by e.g. constructing the complete distribution of magnifications.

I’ve written a few papers on similar topics; one can find more details at http://www.astro.multivax.de:8000/helbig/research/publications/publications.html. Indeed, the early lens-survey results, based mainly on messy optical surveys, did not favour a large cosmological constant. However, newer, mainly radio-based surveys have changed things. (This is not uncommon; the first SNCP paper argued in favour of an Einstein-de Sitter universe, though with large uncertainties. It turns out that their early sample was dominated by a statistical fluke.)

As Peter mentioned, the lens-redshift test, first proposed by Kochanek, is independent of the luminosity function of the sources (and, to make things difficult, what one otherwise needs is the luminosity function as a function of redshift or, equivalently, the redshift distribution as a function of flux density – of course the apparent flux density, not already transformed into physical quantities, because those are dependent on cosmology and we want to use the observations to determine the cosmology; I’m looking at you, Dunlop and Peacock). It is certainly a cleaner test, but doesn’t make use of all the data one has.

Using more data can be better, but one needs the uncertainties in the input parameters, particularly the luminosity-function stuff at the unlensed brightness. There is a paper in the list above which describes such observations for CLASS, the Cosmic Lens All-Sky Survey. This is observationally difficult because one needs redshifts for sources which are selected based only on radio luminosities. The radio positions are extremely accurate, of course, so one can just point and shoot, but if there is no optical ID at all, it is not clear in advance how much time one needs on what size telescope, making such programmes rather unpopular with TACs.

With the lens-redshift test, one needs to take into account selection effects; one can’t just take a list of systems with measured source and lens redshifts, since this might be a biased sample. What one needs is a sample defined in advance, then do the analysis when all redshifts have been measured. Since selection doesn’t involve the lens-galaxy brightness, one has to spend time getting the redshifts of faint lens galaxies, only to make the sample complete. Again, not popular with TACs.

15 years ago, the analysis of gravitational-lens surveys gave comparable uncertainties in the cosmological parameters as the CMB and SNIa magnitude-redshift relation. The uncertainties in the lensing analysis have not come down as much, so the results agree but don’t really provide additional constraints anymore. The main reason for the uncertainty is the m-z relation for the unlensed source population, discussed above. IIRC, the Herschel folks hope to get around this by using photometric redshifts. Nothing wrong with this, in principle, but of course this is then an input quantity with its own uncertainty, which has to be taken into account, and not a measured lens redshift, which for all practical purposes is exact.

Now that the cosmological parameters are tied down rather well, such analyses in the future might be more useful for measuring the evolution of galaxy masses, i.e. inverting the traditional method.

The proper analysis of such a survey is rather involved computationally. This is good, because no-one can accuse us of playing around with the input parameters until we get the “desired” result. This would have been way too much computation back in the 1990s. Of course, the input parameters come from the refereed literature, usually values measured without lensing surveys in mind, so there is no fudge factor anyway.