Archive for WMAP

The Laws of Extremely Improbable Things

Posted in Bad Statistics, The Universe and Stuff on June 9, 2011 by telescoper

After a couple of boozy nights in Copenhagen during the workshop which has just finished, I thought I’d take things easy this evening and make use of the free internet connection in my hotel to post a short item about something I talked about at the workshop here.

Actually I’ve been meaning to mention a nice bit of statistical theory called Extreme Value Theory on here for some time, because not so many people seem to be aware of it, but somehow I never got around to writing about it. People generally assume that statistical analysis of data revolves around “typical” quantities, such as averages or root-mean-square fluctuations (i.e. “standard” deviations). Sometimes, however, it’s not the typical points that are interesting, but those that appear to be drawn from the extreme tails of a probability distribution. This is particularly the case in planning for floods and other natural disasters, but this field also finds a number of interesting applications in astrophysics and cosmology. What should be the mass of the most massive cluster in my galaxy survey? How bright the brightest galaxy? How hot the hottest hotspot in the distribution of temperature fluctuations on the cosmic microwave background sky? And how cold the coldest? Sometimes just one anomalous event can be enormously useful in testing a theory.

I’m not going to go into the theory in any great depth here. Instead I’ll just give you a simple idea of how things work. First imagine you have a set of n observations labelled X_i. Assume that these are independent and identically distributed with a distribution function F(x), i.e.

\Pr(X_i\leq x)=F(x)

Now suppose you locate the largest value in the sample, X_{\rm max}. What is the distribution of this value? The answer is not F(x), but it is quite easy to work out because the probability that the largest value is less than or equal to, say, z is just the probability that each one is less than or equal to that value, i.e.

F_{\rm max}(z) = \Pr \left(X_{\rm max}\leq z\right)= \Pr \left(X_1\leq z, X_2\leq z, \ldots, X_n\leq z\right)

Because the variables are independent and identically distributed, this means that

F_{\rm max} (z) = \left[ F(z) \right]^n

The probability density function associated with this is then just

f_{\rm max}(z) = n f(z) \left[ F(z) \right]^{n-1}

In a situation in which F(x) is known and the other assumptions apply, this simple result offers the best way to proceed in analysing extreme values.
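If you want to convince yourself of this, here's a quick Python sketch (purely illustrative, not part of the original argument) that compares the Monte Carlo distribution of the maximum of n standard Gaussian variables with the exact formula [F(z)]^n:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 100          # sample size
trials = 20000   # number of Monte Carlo realisations

# Draw `trials` independent samples of size n and record the maximum of each
maxima = rng.standard_normal((trials, n)).max(axis=1)

# Compare the empirical distribution of the maxima with the exact result [F(z)]^n
for z in (1.5, 2.0, 2.5, 3.0):
    empirical = np.mean(maxima <= z)
    exact = norm.cdf(z) ** n
    print(f"P(X_max <= {z}): Monte Carlo = {empirical:.3f}, [F(z)]^n = {exact:.3f}")
```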

The mathematical interest in extreme values, however, derives from a 1928 paper by Fisher & Tippett which paved the way towards a general theory of extreme value distributions. I don't want to go too much into the details of that, but I will give a flavour by mentioning a historically important, perhaps surprising, and in any case rather illuminating example.

It turns out that for any distribution F(x) of exponential type, which means that

\lim_{x\rightarrow\infty} \frac{d}{dx}\left[\frac{1-F(x)}{f(x)}\right] = 0

there is a stable asymptotic distribution of extreme values as n \rightarrow \infty, independent of the underlying distribution F(x), which has the form

G(z) = \exp \left(-\exp \left( -\frac{(z-a_n)}{b_n} \right)\right)

where a_n and b_n are location and scale parameters; this is called the Gumbel distribution. It’s not often you come across functions of the form e^{-e^{-y}}!

This result, and others, has established a robust and powerful framework for modelling extreme events. One of course has to be particularly careful if the variables involved are not independent (e.g. part of correlated sequences) or if they are not identically distributed (e.g. if the distribution is changing with time). One also has to be aware of the possibility that an extreme data point may simply be some sort of glitch (e.g. a cosmic ray hit on a pixel, to give an astronomical example). It should also be mentioned that the asymptotic theory is what it says on the tin – asymptotic. Some distributions of exponential type converge extremely slowly to the asymptotic form. A notable example is the Gaussian, which converges at the pathetically slow rate of \sqrt{\ln(n)}! This is why I advocate using the exact distribution resulting from a fully specified model whenever this is possible.
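To see just how slow that convergence can be, here is a rough Python sketch (my own illustration, with arbitrary numbers) that compares the exact tail probability of the maximum of n Gaussian variables, 1 - [\Phi(z)]^n, with a Gumbel distribution fitted to simulated maxima; even for n = 1000 the tails typically differ noticeably:

```python
import numpy as np
from scipy.stats import norm, gumbel_r

rng = np.random.default_rng(0)
n = 1000          # size of each sample whose maximum is taken
trials = 20000    # number of simulated maxima

# Simulate maxima of n standard Gaussian variables
maxima = rng.standard_normal((trials, n)).max(axis=1)

# Fit a Gumbel (location, scale) to the simulated maxima
loc, scale = gumbel_r.fit(maxima)

# Compare exact and fitted-Gumbel tail probabilities for the maximum
for z in (3.5, 4.0, 4.5):
    exact = 1.0 - norm.cdf(z) ** n                 # exact P(X_max > z)
    approx = gumbel_r.sf(z, loc=loc, scale=scale)  # asymptotic (Gumbel) form
    print(f"z = {z}: exact = {exact:.3e}, Gumbel fit = {approx:.3e}")
```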

The pitfalls are dangerous and have no doubt led to numerous misapplications of this theory, but, done properly, it’s an approach that has enormous potential.

I’ve been interested in this branch of statistical theory for a long time, since I was introduced to it as a graduate student by a classic paper written by my supervisor. In fact I contributed to the classic old literature on this topic myself, with a paper on extreme temperature fluctuations in the cosmic microwave background way back in 1988.

Of course there weren’t any CMB maps back in 1988, and if I had thought more about it at the time I should have realised that since this was all done using Gaussian statistics, there was a 50% chance that the most interesting feature would actually be a negative rather than positive fluctuation. It turns out that twenty-odd years on, people are actually discussing an anomalous cold spot in the data from WMAP, proving that Murphy’s law applies to extreme events…

Doubts about the Evidence for Penrose’s Cyclic Universe

Posted in Bad Statistics, Cosmic Anomalies, The Universe and Stuff on November 28, 2010 by telescoper

A strange paper by Gurzadyan and Penrose hit the arXiv a week or so ago. It seems to have generated quite a lot of reaction in the blogosphere and has now made it onto the BBC News, so I think it merits a comment.

The authors claim to have found evidence that supports Roger Penrose's conformal cyclic cosmology in the form of a series of (concentric) rings of unexpectedly low variance in the pattern of fluctuations in the cosmic microwave background seen by the Wilkinson Microwave Anisotropy Probe (WMAP). There's no doubt that a real discovery of such signals in the WMAP data would point towards something radically different from the standard Big Bang cosmology.

I haven’t tried to reproduce Gurzadyan & Penrose’s result in detail, as I haven’t had time to look at it, and I’m not going to rule it out without doing a careful analysis myself. However, what I will say here is that I think you should take the statistical part of their analysis with a huge pinch of salt.

Here’s why.

The authors report a hugely significant detection of their effect (they quote a "6-σ" result); in other words, a feature like the one they find would be expected to arise in the standard cosmological model with a probability of less than 10^{-7}. The type of signal can be seen in their Figure 2, which I reproduce here:

Sorry they're hard to read, but these show the variance measured on concentric rings (y-axis) of varying radius (x-axis) as seen in the WMAP W (94 GHz) and V (61 GHz) frequency channels (top two panels) compared with what is seen in a simulation with purely Gaussian fluctuations generated within the framework of the standard cosmological model (lower panel). The contrast looks superficially impressive, but there's much less to it than meets the eye.
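Just to calibrate what a quoted significance of this size means numerically, here is a minimal scipy snippet (my own aside, not part of Gurzadyan & Penrose's analysis) converting an n-σ level into a Gaussian tail probability:

```python
from scipy.stats import norm

# One-tailed Gaussian tail probability for a few quoted significance levels
for nsigma in (3, 5, 6):
    print(f"{nsigma} sigma -> one-tailed probability = {norm.sf(nsigma):.1e}")
# 6 sigma corresponds to roughly 1e-9, i.e. comfortably below the 1e-7 quoted above
```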

For a start, the separate WMAP W and V channels are not the same as the cosmic microwave background. There is a great deal of galactic foreground that has to be cleaned out of these maps before the pristine primordial radiation can be isolated. The fact that similar patterns can be found in the BOOMERANG data by no means rules out a foreground contribution as a common explanation of the anomalous variance. The authors have excluded the region at low galactic latitude (|b|<20°) in order to avoid the most heavily contaminated parts of the sky, but this is by no means guaranteed to eliminate foreground contributions entirely. Here is the all-sky WMAP W-band map, for example:

Moreover, these maps also contain considerable systematic effects arising from the scanning strategy of the WMAP satellite. The most obvious of these is that the signal-to-noise varies across the sky, but there are others, such as the finite size of the beam of the WMAP telescope.

Neither galactic foregrounds nor correlated noise are present in the Gaussian simulation shown in the lower panel, and the authors do not say what kind of beam smoothing is used either. The comparison of WMAP single-channel data with simple Gaussian simulations is consequently deeply flawed and the significance level quoted for the result is certainly meaningless.

Having not looked at this in detail myself, I'm not going to say that the authors' conclusions are necessarily false, but I would be very surprised if an effect this large were real, given the strenuous efforts so many people have made to probe the detailed statistics of the WMAP data; see, e.g., various items in my blog category on cosmic anomalies. Cosmologists have been wrong before, of course, but then so have even eminent physicists like Roger Penrose…

Another point I'm not at all sure about: even if the rings of low variance are real – which I doubt – do they really provide evidence of a cyclic universe? It doesn't seem obvious to me that the model Penrose advocates would actually produce a CMB sky with such properties anyway.

Above all, I stress that this paper has not been subjected to proper peer review. If I were the referee I’d demand a much higher level of rigour in the analysis before I would allow it to be published in a scientific journal. Until the analysis is done satisfactorily, I suggest that serious students of cosmology shouldn’t get too excited by this result.

It occurs to me that other cosmologists out there might have looked at this result in more detail than I have had time to. If so, please feel free to add your comments in the box…

IMPORTANT UPDATE: 7th December. Two papers have now appeared on the arXiv (here and here) which refute the Gurzadyan-Penrose claim. Apparently, the data behave as Gurzadyan and Penrose claim, but so do proper simulations. In other words, it's the bottom panel of the figure that's wrong.

ANOTHER UPDATE: 8th December. Gurzadyan and Penrose have responded with a two-page paper which makes so little sense I had better not comment at all.



Nobel Predictions

Posted in The Universe and Stuff on September 24, 2010 by telescoper

I was quite interested to see, in this week's Times Higher, a set of predictions of the winners of this year's Nobel Prizes. I've taken the liberty of publishing the table here, although for reasons of taste I've removed the column pertaining to Economics.

2010
Medicine: D. L. Coleman, J. M. Friedman (leptin); E. A. McCulloch, J. E. Till (stem cells) and S. Yamanaka (iPS cells); R. M. Steinman (dendritic cells)
Chemistry: P. O. Brown (DNA microarrays); S. Kitagawa, O. M. Yaghi (metal-organic frameworks); S. J. Lippard (metallointercalators)
Physics: C. L. Bennett, L. A. Page, D. N. Spergel (WMAP); T. W. Ebbesen (surface plasmon photonics); S. Perlmutter, A. G. Riess, B. P. Schmidt (dark energy)

2009
Medicine: E. H. Blackburn, C. W. Greider, J. W. Szostak (telomeres) (won in 2009); J. E. Rothman, R. Schekman (vesicle transport); S. Ogawa (fMRI)
Chemistry: M. Grätzel (solar cells); J. K. Barton, B. Giese, G. B. Schuster (charge transfer in DNA); B. List (organic asymmetric catalysis)
Physics: Y. Aharonov, M. V. Berry (Aharonov-Bohm effect and Berry phase); J. I. Cirac, P. Zoller (quantum optics); J. B. Pendry, S. Schultz, D. R. Smith (negative refraction)

2008
Medicine: S. Akira, B. A. Beutler, J. Hoffmann (toll-like receptors); V. R. Ambros, G. Ruvkun (miRNAs); R. Collins, R. Peto (meta-analysis)
Chemistry: Roger Y. Tsien (green fluorescent protein); C. M. Lieber (nanomaterials); K. Matyjaszewski (ATRP)
Physics: A. K. Geim, K. Novoselov (graphene); V. C. Rubin (dark matter); R. Penrose, D. Schechtman (Penrose tilings, quasicrystals)

2007
Medicine: F. H. Gage (neurogenesis); R. J. Ellis, F. U. Hartl, A. L. Horwich (chaperones); J. Massagué (TGF-beta)
Chemistry: S. J. Danishefsky (epothilones); D. Seebach (synthetic organic methods); B. M. Trost (organometallic and bio-organic chemistry)
Physics: S. Iijima (nanotubes); A. B. McDonald (neutrino mass); M. J. Rees (cosmology)

2006
Medicine: Mario Capecchi, Martin J. Evans and Oliver Smithies (gene targeting) (won in 2007); P. Chambon, R. M. Evans, E. V. Jensen (hormone receptors); A. J. Jeffreys (DNA profiling)
Chemistry: G. R. Crabtree, S. L. Schreiber (small molecule chembio); T. J. Marks (organometallic); D. A. Evans, S. V. Ley (natural products)
Physics: Albert Fert and Peter Grünberg (GMR) (won in 2007); A. H. Guth, A. Linde, P. J. Steinhardt (inflation); E. Desurvire, M. Nakazawa, D. N. Payne (erbium-doped fibre amplifiers)

2002-05
Medicine: M. J. Berridge (cell signalling); A. G. Knudson, B. Vogelstein, R. A. Weinberg (tumour suppressor genes); F. S. Collins, E. S. Lander, J. C. Venter (gene sequencing)
Chemistry: Robert H. Grubbs (metathesis method) (predicted and won in 2005); A. Bax (NMR and proteins); K. C. Nicolaou (total synthesis, taxol); G. M. Whitesides, S. Shinkai, J. F. Stoddart (nano self-assembly)
Physics: M. B. Green, J. H. Schwarz, E. Witten (string theory); Y. Tokura (condensed matter); S. Nakamura (gallium nitride-based LEDs)

It’s quite interesting to see two sets of contenders from the field of cosmology, one from the Wilkinson Microwave Anisotropy Probe (WMAP) and another from the two groups studying high-redshift supernovae whose studies have led to the inference that the universe is accelerating thus indicating the presence of dark energy. Although both these studies are immensely important, I’d actually be surprised if either is the winner of the physics prize. In the case of WMAP I think it’s probably a bit too soon after the 2006 award for COBE for the microwave background to collect another prize. In the case of the supernovae searches I think it’s still too early to say that we actually know what is going on with the apparent accelerated expansion.

You never know, though, and I’d personally be delighted if either of these groups found themselves invited to Stockholm this December.

Interested to see how these predictions were made I had a quick look at the link the Times Higher kindly provided for further explanation, at which point my heart sank. I should have realised that it would be the dreaded Thomson Reuters, purveyors of unreliable numerology to the unwary. They base their predictions on the kind of bibliometric flummery of which they are expert peddlers, but which is not at all similar to the way the Nobel Foundation does its selections. No wonder, then, that their track-record in predicting Nobel prizes is so utterly abysmal…



Publish or be Damned

Posted in Science Politics, The Universe and Stuff on August 23, 2010 by telescoper

For tonight's post I thought I'd compose a commentary on a couple of connected controversies suggested by an interestingly provocative piece by Nigel Hawkes in the Independent this weekend entitled "Peer Review journals aren't worth the paper they're written on". Here is an excerpt:

The truth is that peer review is largely hokum. What happens if a peer-reviewed journal rejects a paper? It gets sent to another peer-reviewed journal a bit further down the pecking order, which is happy to publish it. Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival’s wheel – are seldom examined.

In contrast to many of my academic colleagues I largely agree with Nigel Hawkes, but I urge you to read the piece yourself to see whether you are convinced by his argument.

I'm not actually convinced that peer review is as biased as Hawkes asserts. I rather think that the strongest argument against the scientific journal establishment is the ruthless racketeering of the academic publishers that profit from it. Still, I do think he has a point. Scientists who garner esteem and influence in the public domain through their work should be required to defend it out in the open to both scientists and non-scientists alike. I'm not saying that's easy to do in the face of ill-informed or even illiterate criticism, but it is in my view a necessary price to pay, especially when the research is funded by the taxpayer.

It’s not that I think many scientists are involved in sinister activities, manipulating their data and fiddling their results behind closed doors, but that as long as there is an aura of secrecy it will always fuel the conspiracy theories on which the enemies of reason thrive. We often hear the accusation that scientists behave as if they are priests. I don’t think they do, but there are certainly aspects of scientific practice that make it appear that way, and the closed world of academic publishing is one of the things that desperately needs to be opened up.

For a start, I think we scientists should forget academic journals and peer review, and publish our results directly in open access repositories. In the old days journals were necessary to communicate scientific work. Peer review guaranteed a certain level of quality. But nowadays it is unnecessary. Good work will achieve visibility through the attention others give it. Likewise, open scrutiny will be a far more effective way of identifying errors than the existing referee process. Some steps will have to be taken to prevent abuse of access to the databases, and even then I suspect a good many crank papers will make it through. But in the long run, I strongly believe this is the only way that science can develop in the age of digital democracy.

But scrapping the journals is only part of the story. I'd also argue that all scientists undertaking publicly funded research should be required to put their raw data in the public domain too. I would allow a short proprietary period after the experiments, observations or whatever form of data collection is involved. I can also see that ethical issues may require certain data to be withheld, such as the names of subjects in medical trials. Issues will also arise when research is funded commercially rather than by the taxpayer. However, I still maintain that full disclosure of all raw data should be the rule rather than the exception. After all, if it's research that's funded by the public, it is really the public that owns the data anyway.

In astronomy this is pretty much the way things operate nowadays, in fact. Maybe stargazers have a more romantic way of thinking about scientific progress than their more earthly counterparts, but it is quite normal – even obligatory for certain publicly funded projects – for surveys to release all their data. I used to think that it was enough just to publish the final results, but I've become so distrustful of the abuse of statistics throughout the field that I think it is necessary for independent scientists to check every step of the analysis of every major result. In the past it was simply too difficult to publish large catalogues in a form that anyone could use, but nowadays that is simply no longer the case. Astronomers have embraced this reality, and it has liberated them.

To give a good example of the benefits of this approach, take the Wilkinson Microwave Anisotropy Probe (WMAP) which released full data sets after one, three, five and seven years of operation. Scores of groups around the world have done their best to find glitches in the data and errors in the analysis without turning up anything particularly significant. The standing of the WMAP team is all the higher for having done this, although I don’t know whether they would have chosen to had they not been required to do so under the terms of their funding!

In the world of astronomy research it's not at all unusual to find data for the object or set of objects you're interested in from a public database, or by politely asking another team if they wouldn't mind sharing their results. And if you happen to come across a puzzling result you suspect might be erroneous and want to check the calculations, you just ask the author for the numbers and, generally speaking, they send them to you. A disagreement may ensue about who is right and who is wrong, but that's the way science is supposed to work. Everything must be open to question. It's often a chaotic process, but it's a process all the same, and it is one that has served us incredibly well.

I was quite surprised recently to learn that this isn't the way other scientific disciplines operate at all. When I challenged the statistical analysis in a paper on neuroscience recently, my request to have a look at the data myself was greeted with a frosty refusal. The authors seemed to take it as a personal affront that anyone might have the nerve to question their study. I had no alternative but to go public with my doubts, and my concerns have never been satisfactorily answered. How many other examples are there in which the application of the scientific method has come to a grinding halt because of compulsive secrecy? Nobody likes to have their failings exposed in public, and I'm sure no scientist likes to see an error pointed out, but surely it's better to be seen to have made an error than to maintain a front that perpetuates the suspicion of malpractice?

Another, more topical, example concerns the University of East Anglia’s Climatic Research Unit which was involved in the Climategate scandal and which has apparently now decided that it wants to share its data. Fine, but I find it absolutely amazing that such centres have been able to get away with being so secretive in the past. Their behaviour was guaranteed to lead to suspicions that they had something to hide. The public debate about climate change may be noisy and generally ill-informed but it’s a debate we must have out in the open.

I'm not going to get all sanctimonious about 'pure' science, nor am I going to question the motives of individuals working in disciplines I know very little about. I would, however, say that from the outside it certainly appears that there is often a lot more going on in the world of academic research than the simple quest for knowledge.

Of course there are risks in opening up the operation of science in the way I'm suggesting. Cranks will probably proliferate, but we'll no doubt get used to them – I'm a cosmologist and I'm pretty much used to them already! Some good work may find it a bit harder to be recognized. Lack of peer review may mean more erroneous results see the light of day. Empire-builders won't like it much either, as a truly open system of publication will be a great leveller of reputations. But in the final analysis, the risk of sticking to our arcane practices is far higher. Public distrust will grow and centuries of progress may be swept aside on a wave of irrationality. If the price for avoiding that is to change our attitude to who owns our data, then it's a price well worth paying.



Cosmology on its beam-ends?

Posted in Cosmic Anomalies, The Universe and Stuff on June 14, 2010 by telescoper

Interesting press release today from the Royal Astronomical Society about a paper (preprint version here) which casts doubt on whether the Wilkinson Microwave Anisotropy Probe supports the standard cosmological model to the extent that is generally claimed. Apologies if this is a bit more technical than my usual posts (but I like occasionally to pretend that it’s a science blog).

The abstract of the paper (by Sawangwit & Shanks) reads

Using the published WMAP 5-year data, we first show how sensitive the WMAP power spectra are to the form of the WMAP beam. It is well known that the beam profile derived from observations of Jupiter is non-Gaussian and indeed extends, in the W band for example, well beyond its 12.6 arcmin FWHM core out to more than 1 degree in radius. This means that even though the core width corresponds to wavenumber l ~ 1800, the form of the beam still significantly affects the WMAP results even at l ~ 200, which is the scale of the first acoustic peak. The difference between the beam-convolved Cl and the final Cl is ~ 70% at the scale of the first peak, rising to ~ 400% at the scale of the second. New estimates of the Q, V and W-band beam profiles are then presented, based on a stacking analysis of the WMAP5 radio source catalogue and temperature maps. The radio sources show a significantly (3-4σ) broader beam profile on scales of 10′-30′ than that found by the WMAP team, whose beam analysis is based on measurements of Jupiter. Beyond these scales the beam profiles from the radio sources are too noisy to give useful information. Furthermore, we find tentative evidence for a non-linear relation between WMAP and ATCA/IRAM 95 GHz source fluxes. We discuss whether the wide beam profiles could be caused either by radio source extension or clustering and find that neither explanation is likely. We also argue against the possibility that Eddington bias is affecting our results. The reasons for the difference between the radio source and the Jupiter beam profiles are therefore still unclear. If the radio source profiles were then used to define the WMAP beam, there could be a significant change in the amplitude and position of even the first acoustic peak. It is therefore important to identify the reasons for the differences between these two beam profile estimates.

The press release puts it somewhat more dramatically

New research by astronomers in the Physics Department at Durham University suggests that the conventional wisdom about the content of the Universe may be wrong. Graduate student Utane Sawangwit and Professor Tom Shanks looked at observations from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite to study the remnant heat from the Big Bang. The two scientists find evidence that the errors in its data may be much larger than previously thought, which in turn makes the standard model of the Universe open to question. The team publish their results in a letter to the journal Monthly Notices of the Royal Astronomical Society.

I dare say the WMAP team will respond in due course, but this paper spurred me to mention some work on this topic that was done by my friend (and former student) Lung-Yih Chiang. During his last visit to Cardiff we discussed this at great length and got very excited at one point when we thought we had discovered an error along the lines that the present paper claims. However, looking more carefully into it we decided that this wasn’t the case and we abandoned our plans to publish a paper on it.

Let me show you a few slides from a presentation that Lung-Yih gave to me a while ago. For a start here is the famous power-spectrum of the temperature fluctuations of the cosmic microwave background which plays an essential role in determining the parameters of the standard cosmology:

The position of the so-called “acoustic peak” plays an important role in determining the overall curvature of space-time on cosmological scales and the higher-order peaks pin down other parameters. However, it must be remembered that WMAP doesn’t just observe the cosmic microwave background. The signal it receives is heavily polluted by contamination from within our Galaxy and there is also significant instrumental noise.  To deal with this problem, the WMAP team exploit the five different frequency channels with which the probe is equipped, as shown in the picture below.

The CMB, being described by a black-body spectrum, has a sky temperature that doesn't vary with frequency. Foreground emission, on the other hand, has an effective temperature that varies with frequency in a way that is fairly well understood. The five available channels can therefore be used to model and subtract the foreground contribution to the overall signal. However, the different channels have different angular resolution (because they correspond to different wavelengths of radiation). Here are some sample patches of sky illustrating this

At each frequency the sky is blurred out by the “beam” of the WMAP optical system; the blurring is worse at low frequencies than at high frequencies. In order to do the foreground subtraction, the WMAP team therefore smooth all the frequency maps to have the same resolution, i.e. so the net effect of optical resolution and artificial smoothing produces the same overall blurring (actually 1 degree).  This requires accurate knowledge of the precise form of the beam response of the experiment to do it accurately. A rough example (for illustration only) is given in the caption above.
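For a very rough idea of what that smoothing step involves, here is a hedged sketch assuming purely Gaussian beams and illustrative beam widths (the real WMAP beams are neither exactly Gaussian nor exactly these sizes): Gaussian widths add in quadrature, so each channel needs extra smoothing with a FWHM equal to the square root of the difference of the squares.

```python
import numpy as np

# Illustrative (not official) beam FWHMs in degrees for the five WMAP channels
beam_fwhm = {"K": 0.9, "Ka": 0.7, "Q": 0.5, "V": 0.35, "W": 0.22}
target_fwhm = 1.0   # common resolution of about 1 degree, as mentioned above

# For Gaussian beams, widths add in quadrature, so the extra smoothing
# kernel for each channel has FWHM = sqrt(target^2 - beam^2)
for band, fwhm in beam_fwhm.items():
    extra = np.sqrt(target_fwhm**2 - fwhm**2)
    print(f"{band} band: additional smoothing FWHM ~ {extra:.2f} degrees")
```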

Now, here are the power spectra of the maps in each frequency channel

Note this is Cl not l(l+1)Cl as in the first plot of the spectrum. Now you see how much foreground there is in the data: the curves would lie on top of each other if the signal were pure CMB, i.e. if it did not vary with frequency. The equation at the bottom basically just says that the overall spectrum is a smoothed version of the CMB plus the foregrounds  plus noise. Note, crucially,  that the smoothing suppresses the interesting high-l wiggles.

I haven’t got space-time enough to go into how the foreground subtraction is carried out, but once it is done it is necessary to “unblur” the maps in order to see the structure at small angular scales, i.e. at large spherical harmonic numbers l. The initial process of convolving the sky pattern with a filter corresponds to multiplying the power-spectrum with a “window function” that decreases sharply at high l, so to deconvolve the spectrum one essentially has to divide by this window function to reinstate the power removed at high harmonics.

This is where it all gets very tricky. The smoothing applied is very close to the scale of the acoustic peaks so you have to do it very carefully to avoid introducing artificial structure in Cl or obliterating structure that you want to see. Moreover, a small error in the beam gets blown up in the deconvolution so one can go badly wrong in recovering the final spectrum. In other words, you need to know the beam very well to have any chance of getting close to the right answer!

The next picture gives a rough model of how much the "recovered" spectrum is affected by even a small error in the beam profile, which for illustration only is assumed to be Gaussian. It shows how sensitive the shape of the deconvolved spectrum is to small errors in the beam.

Incidentally, the ratty blue line shows the spectrum obtained from a small patch of the sky rather than the whole sky. We were interested to see how much the spectrum varied across the sky so broke it up into square patches about the same size as those analysed by the Boomerang experiment. This turns out to be a pretty good way of getting the acoustic peak position but, as you can see, you lose information at low l (i.e. on scales larger than the patch).
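Coming back to the beam issue: to make the sensitivity to beam errors concrete, here is a toy Python sketch (mine, assuming Gaussian beams and made-up widths purely for illustration) of what happens if the spectrum is deconvolved using a beam that is only slightly too wide; the multiplicative error in the recovered Cl grows rapidly with l.

```python
import numpy as np

def beam_window(l, fwhm_deg):
    """Power-spectrum window b_l^2 for a Gaussian beam of the given FWHM (degrees)."""
    sigma = np.radians(fwhm_deg) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-l * (l + 1.0) * sigma**2)

l = np.arange(2, 1001)
true_fwhm = 0.35    # beam that actually smoothed the map (made-up value)
wrong_fwhm = 0.37   # slightly different beam assumed in the deconvolution

# Recovered C_l / true C_l when deconvolving with the wrong window function
ratio = beam_window(l, true_fwhm) / beam_window(l, wrong_fwhm)
for ell in (200, 500, 800):
    print(f"l = {ell}: recovered/true C_l = {ratio[ell - 2]:.2f}")
```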

The WMAP beam isn't actually Gaussian – it differs quite markedly in its tails, which means that there's even more cross-talk between different harmonic modes than in this example – but I hope you get the basic point. As Sawangwit & Shanks say, you need to know the beam very well to get the right fluctuation spectrum out. Move the acoustic peak around only slightly and all bets are off about the cosmological parameters and, perhaps, the evidence for dark energy and dark matter. Lung-Yih looked at the way the WMAP team had done it and concluded that if their published beam shape was right then they had done a good job and there's nothing substantially wrong with the results shown in the first graph.

Sawangwit & Shanks suggest the beam isn’t right so the recovered angular spectrum is suspect. I’ll need to look a bit more at the evidence they consider before commenting on that, although if anyone else has worked through it I’d be happy to hear from them through the comments box!

The Seven Year Itch

Posted in Bad Statistics, Cosmic Anomalies, The Universe and Stuff on January 27, 2010 by telescoper

I was just thinking last night that it’s been a while since I posted anything in the file marked cosmic anomalies, and this morning I woke up to find a blizzard of papers on the arXiv from the Wilkinson Microwave Anisotropy Probe (WMAP) team. These relate to an analysis of the latest data accumulated now over seven years of operation; a full list of the papers is given here.

I haven’t had time to read all of them yet, but I thought it was worth drawing attention to the particular one that relates to the issue of cosmic anomalies. I’ve taken the liberty of including the abstract here:

A simple six-parameter LCDM model provides a successful fit to WMAP data, both when the data are analyzed alone and in combination with other cosmological data. Even so, it is appropriate to search for any hints of deviations from the now standard model of cosmology, which includes inflation, dark energy, dark matter, baryons, and neutrinos. The cosmological community has subjected the WMAP data to extensive and varied analyses. While there is widespread agreement as to the overall success of the six-parameter LCDM model, various "anomalies" have been reported relative to that model. In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best fit model, one can select for low probability features of the WMAP data. Low probability features are expected, but it is not usually straightforward to determine whether any particular low probability feature is the result of the a posteriori selection or of non-standard cosmology. We examine in detail the properties of the power spectrum with respect to the LCDM model. We examine several potential or previously claimed anomalies in the sky maps and power spectra, including cold spots, low quadrupole power, quadrupole-octupole alignment, hemispherical or dipole power asymmetry, and quadrupole power asymmetry. We conclude that there is no compelling evidence for deviations from the LCDM model, which is generally an acceptable statistical fit to WMAP and other cosmological data.

Since I’m one of those annoying people who have been sniffing around the WMAP data for signs of departures from the standard model, I thought I’d comment on this issue.

As the abstract says, the  LCDM model does indeed provide a good fit to the data, and the fact that it does so with only 6 free parameters is particularly impressive. On the other hand, this modelling process involves the compression of an enormous amount of data into just six numbers. If we always filter everything through the standard model analysis pipeline then it is possible that some vital information about departures from this framework might be lost. My point has always been that every now and again it is worth looking in the wastebasket to see if there’s any evidence that something interesting might have been discarded.

Various potential anomalies – mentioned in the above abstract – have been identified in this way, but usually there has turned out to be less to them than meets the eye. There are two reasons not to get too carried away.

The first reason is that no experiment – not even one as brilliant as WMAP – is entirely free from systematic artefacts. Before we get too excited and start abandoning our standard model for more exotic cosmologies, we need to be absolutely sure that we’re not just seeing residual foregrounds, instrument errors, beam asymmetries or some other effect that isn’t anything to do with cosmology. Because it has performed so well, WMAP has been able to do much more science than was originally envisaged, but every experiment is ultimately limited by its own systematics and WMAP is no different. There is some (circumstantial) evidence that some of the reported anomalies may be at least partly accounted for by  glitches of this sort.

The second point relates to basic statistical theory. Generally speaking, an anomaly A (some property of the data) is flagged as such because it is deemed to be improbable given a model M (in this case the LCDM). In other words the conditional probability P(A|M) is a small number. As I’ve repeatedly ranted about in my bad statistics posts, this does not necessarily mean that P(M|A)- the probability of the model being right – is small. If you look at 1000 different properties of the data, you have a good chance of finding something that happens with a probability of 1 in a thousand. This is what the abstract means by a posteriori reasoning: it’s not the same as talking out of your posterior, but is sometimes close to it.
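A toy calculation (my own, assuming independent properties purely for illustration) shows how quickly apparently significant flukes accumulate under this kind of a posteriori selection:

```python
# Chance of finding at least one "1 in 1000" feature when examining N
# independent properties of the data (a deliberately crude model of
# a posteriori selection; real data properties are of course correlated)
p_single = 1.0 / 1000.0

for n_properties in (10, 100, 1000):
    p_any = 1.0 - (1.0 - p_single) ** n_properties
    print(f"{n_properties} properties examined -> P(at least one fluke) = {p_any:.3f}")
```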

In order to decide how seriously to take an anomaly, you need to work out P(M|A), the probability of the model given the anomaly. This requires you not only to take into account all the other properties of the data that are explained by the model (i.e. those that aren't anomalous), but also to specify an alternative model that explains the anomaly better than the standard model. If you can do this without introducing too many free parameters, then it may be taken as compelling evidence for an alternative model. No such model exists – at least for the time being – so the message of the paper is rightly skeptical.
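In case it helps, here is a toy posterior-odds calculation along those lines (all the numbers are assumptions of mine, purely for illustration, and it deliberately ignores everything else the standard model gets right):

```python
# Toy posterior odds: an anomaly that is improbable under LCDM only counts
# against LCDM if some alternative model predicts it much better, and the
# prior odds on that alternative matter too. All values below are invented.
p_anomaly_given_lcdm = 1e-3   # P(A|M) for the standard model (assumed)
p_anomaly_given_alt = 0.5     # P(A|M') for a hypothetical alternative (assumed)
prior_odds_alt = 1e-2         # prior odds of the alternative relative to LCDM (assumed)

posterior_odds = prior_odds_alt * p_anomaly_given_alt / p_anomaly_given_lcdm
print(f"Posterior odds (alternative : LCDM) = {posterior_odds:.1f}")
```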

So, to summarize, I think what the WMAP team say is basically sensible, although I maintain that rummaging around in the trash is a good thing to do. Models are there to be tested and surely the best way to test them is to focus on things that look odd rather than simply congratulating oneself about the things that fit? It is extremely impressive that such intense scrutiny over the last seven years has revealed so few oddities, but that just means that we should look even harder…

Before too long, data from Planck will provide an even sterner test of the standard framework. We really do need an independent experiment to see whether there is something out there that WMAP might have missed. But we’ll have to wait a few years for that.

So far it’s WMAP 7 Planck 0, but there’s plenty of time for an upset. Unless they close us all down.

Another take on cosmic anisotropy

Posted in Cosmic Anomalies, The Universe and Stuff on October 22, 2009 by telescoper

Yesterday we had a nice seminar here by Antony Lewis who is currently at Cambridge, but will be on his way to Sussex in the New Year to take up a lectureship there. I thought I’d put a brief post up here so I can add it to my collection of items concerning cosmic anomalies. I admit that I had missed the paper he talked about (by himself and Duncan Hanson) when it came out on the ArXiv last month, so I’m very glad his visit drew this to my attention.

What Hanson & Lewis did was to think of a number of simple models in which the pattern of fluctuations in the temperature of the cosmic microwave background radiation across the sky might have a preferred direction. They then construct optimal estimators for the parameters in these models (assuming the underlying fluctuations are Gaussian) and then apply these estimators to the data from the Wilkinson Microwave Anisotropy Probe (WMAP). Their subsequent analysis attempts to answer the question whether the data prefer these anisotropic models to the bog-standard cosmology which is statistically isotropic.

I strongly suggest you read their paper in detail because it contains a lot of interesting things, but I wanted to pick out one result for special mention. One of their models involves a primordial power spectrum that is intrinsically anisotropic. The model is of the form

P(\vec{k})=P(k)[1+a(k)g(\vec{k})]

compared to the standard P(k), which does not depend on the direction of the wavevector. They find that the WMAP measurements strongly prefer this model to the standard one. Great! A departure from the standard cosmological model! New Physics! Re-write your textbooks!

Well, not really. The direction revealed by the best-choice parameter fit to the data is shown in the smoothed picture (top). Underneath it are simulations of the sky predicted by their model, decomposed into an isotropic part (in the middle) and an anisotropic part (at the bottom).


You can see immediately that the asymmetry axis is extremely close to the scan axis of the WMAP satellite, i.e. at right angles to the Ecliptic plane.

This immediately suggests that it might not be a primordial effect at all but either (a) a signal that is aligned with the Ecliptic plane (i.e. something emanating from the Solar System) or (b) something arising from the WMAP scanning strategy. Antony went on to give strong evidence that it wasn’t primordial and it wasn’t from the Solar System. The WMAP satellite has a number of independent differencing assemblies. Anything external to the satellite should produce the same signal in all of them, but the observed signal varies markedly from one to another. The conclusion, then, is that this particular anomaly is largely generated by an instrumental systematic.

The best candidate for such an effect is that it is an artefact of an asymmetry in the beams of the two telescopes on the satellite. Since the scan pattern has a preferred direction, the beam profile may introduce a direction-dependent signal into the data. No attempt has been made to correct for this effect in the published maps so far, and it seems to me very likely that this is the root of this particular anomaly.

We will have to see the extent to which beam systematics will limit the ability of Planck to shed further light on this issue.
