Archive for Planck

A Cosmic Microwave Background Dipole Puzzle

Posted in Cute Problems, The Universe and Stuff on October 31, 2016 by telescoper

The following is tangentially related to a discussion I had during a PhD examination last week, and I thought it might be worth sharing here to stimulate some thought among people interested in cosmology.

First here’s a picture of the temperature fluctuations in the cosmic microwave background from Planck (just because it’s so pretty).

[Image: the Planck map of CMB temperature fluctuations]

The analysis of these fluctuations yields a huge amount of information about the universe, including its matter content and spatial geometry as well as the form of primordial fluctuations that gave rise to galaxies and large-scale structure. The variations in temperature that you see in this image are small – about one part in a hundred thousand – and they show that the universe appears to be close to isotropic (at least around us).

I’ll blog later (assuming I find time) about the latest constraints on this subject, but for the moment I’ll just point out something that has to be removed from the above map to make it look isotropic: the Cosmic Microwave Background Dipole. Here is a picture (which I got from here):

[Image: a map of the CMB dipole]

This signal – called a dipole because it corresponds to a simple 180 degree variation across the sky – is about a hundred times larger than the “intrinsic” fluctuations which occur on smaller angular scales and are seen in the first map. According to the standard cosmological framework this dipole is caused by our peculiar motion through the frame in which microwave background photons are distributed homogeneously and isotropically. Had we no peculiar motion then we would be “at rest” with respect to this CMB reference frame so there would be no such dipole. In the standard cosmological framework this “peculiar motion” of ours is generated by the gravitational effect of local structures and is thus a manifestation of the fact that our universe is not homogeneous on small scales; by “small” I mean on the scales of a hundred Megaparsecs or so. Anyway, if you’re interested in goings-on in the very early universe or its properties on extremely large scales the dipole is thus of no interest and, being so large, it is quite easy to subtract. That’s why it isn’t there in maps such as the Planck map shown above. If it had been left in it would swamp the other variations.

Anyway, the interpretation of the CMB dipole in terms of our peculiar motion through the CMB frame leads to a simple connection between the pattern shown in the second figure and the velocity of the observational frame: it’s a Doppler Effect. We are moving towards the upper right of the figure (in which direction photons are blueshifted, so the CMB looks a bit hotter in that direction) and away from the bottom left (whence the CMB photons are redshifted so the CMB appears a bit cooler). The amplitude of the dipole implies that the Solar System is moving with a velocity of around 370 km/s with respect to the CMB frame.

Now 370 km/s is quite fast, but it’s much smaller than the speed of light – it’s only about 0.12% of it, in fact – which means that one can treat this as basically a non-relativistic Doppler Effect. That means that it’s all quite straightforward to understand with elementary physics. In the limit that v/c<<1 the Doppler Effect only produces a dipole pattern of the type we see in the Figure above, and the amplitude of the dipole is ΔT/T~v/c because all terms of higher order in v/c are negligibly small. Furthermore, in this case the dipole is simply superimposed on the primordial fluctuations but otherwise does not affect them.
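For anyone who wants to check this numerically, here’s a minimal Python sketch (an illustration of mine, assuming the standard relativistic formula T(θ) = T0/[γ(1 - β cos θ)] with β = v/c) that expands the observed temperature pattern in Legendre polynomials for v = 370 km/s:

```python
# A sketch (not from the original post): expand the temperature pattern seen by an
# observer moving at v = 370 km/s through an isotropic blackbody in Legendre
# polynomials, to check that only the dipole (l = 1) term matters at this speed.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

c = 299792.458    # speed of light [km/s]
T0 = 2.725        # mean CMB temperature [K]
v = 370.0         # assumed peculiar velocity of the Solar System [km/s]
beta = v / c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

def dT_over_T(mu):
    """Fractional temperature shift in a direction with cos(angle to the motion) = mu."""
    return 1.0 / (gamma * (1.0 - beta * mu)) - 1.0

for ell in range(4):
    integral, _ = quad(lambda mu: dT_over_T(mu) * eval_legendre(ell, mu), -1.0, 1.0)
    a_ell = 0.5 * (2 * ell + 1) * integral   # Legendre coefficient of dT/T
    print(f"l = {ell}: {a_ell:+.2e}")

print(f"v/c = {beta:.2e}, so the dipole is about {beta * T0 * 1e3:.2f} mK")
```

At this speed the dipole coefficient comes out at about 1.2 × 10^-3 (around 3.4 mK when multiplied by T0), the quadrupole is of order 10^-6 and everything beyond that is smaller still, which is the quantitative content of the paragraph above.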

My question to the reader, i.e. you, is the following. Suppose we weren’t travelling at a sedate 370 km/s through the CMB frame but instead entered the world of science fiction and took a trip on a spacecraft that can travel close to the speed of light. What would this do to the CMB? Would we still just see a dipole, or would we see additional (relativistic) effects? If there are other effects, what would they do to the pattern of “intrinsic” fluctuations?

Comments and answers through the box below, please!


Should we worry about the Hubble Constant?

Posted in The Universe and Stuff on July 27, 2016 by telescoper

One of the topics that came up in the discussion sessions at the meeting I was at over the weekend was the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter. Coincidentally, I found this old preprint while tidying up my office yesterday:

[Image: a 1979 preprint on cosmological parameters]

Things have changed quite a bit since 1979! Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Image: a summary of Planck cosmological parameter estimates, including the derived value of H0]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68 km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates the headline determination is H0 = (67.8 +/- 0.9) km/s/Mpc.

Note however that a recent “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data quotes a headline value of (73.24 +/- 1.74) km/s/Mpc. Had these two values been obtained in 1979 we wouldn’t have worried because the errors would have been much larger, but nowadays the measurements are much more precise and there does seem to be a hint of a discrepancy somewhere around the 3 sigma level, depending on precisely which determination you use. On the other hand, the history of Hubble constant determinations is one of results being quoted with very small “internal” errors that turned out to be much smaller than the systematic uncertainties.
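For what it’s worth, here’s the back-of-the-envelope arithmetic behind that “3 sigma” statement (a rough sketch, assuming the two quoted uncertainties are Gaussian and independent):

```python
# Rough significance of the difference between the two headline values of H0 quoted
# above, assuming Gaussian and independent uncertainties.
import math

H0_planck, err_planck = 67.8, 0.9     # km/s/Mpc (Planck 2015, derived)
H0_riess, err_riess = 73.24, 1.74     # km/s/Mpc (Riess et al. 2016, distance ladder)

diff = H0_riess - H0_planck
err = math.hypot(err_planck, err_riess)   # combine the errors in quadrature
print(f"difference = {diff:.2f} +/- {err:.2f} km/s/Mpc (~{diff / err:.1f} sigma)")
# roughly 5.4 +/- 2.0 km/s/Mpc, i.e. a tension at about the 2.8-sigma level
```

Exactly how many sigma you get depends on which Planck likelihood combination and which local determination you plug in, which is why the figure quoted above is hedged as “somewhere around the 3 sigma level”.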

I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters. If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which might point to the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just think the evidence at the moment isn’t convincing me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.
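To give a flavour of the Ockham’s Razor argument, here’s a deliberately crude toy calculation (with entirely made-up numbers; a proper analysis would compare Bayesian evidences, not information criteria). The point is simply that an extra free parameter has to buy enough improvement in fit to offset a complexity penalty:

```python
# A toy Ockham's Razor bookkeeping exercise using the Bayesian Information Criterion,
# BIC = chi^2 + k*ln(N), as a crude proxy for a full evidence calculation.
# Every number below is hypothetical and purely for illustration.
import math

N = 2500            # pretend number of data points (e.g. band powers)
delta_chi2 = 3.0    # pretend improvement in chi^2 from adding one extra parameter

penalty = math.log(N)               # BIC penalty for one extra free parameter
delta_bic = penalty - delta_chi2    # positive => the simpler model is preferred
print(f"penalty = {penalty:.1f}, fit improvement = {delta_chi2:.1f}, Delta BIC = {delta_bic:+.1f}")
```

In this toy example the penalty (ln 2500 ≈ 7.8) outweighs the improvement in fit, so the simpler model wins; that’s the flavour of the model-selection argument, although the real calculation involves priors and evidences rather than a simple penalty term.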

It’s quite interesting to think how much we scientists tend to carry on despite the signs that things might be wrong. Take, for example, Newton’s Gravitational Constant, G. Measurements of this parameter are extremely difficult to do, and different experiments do seem to be in disagreement with each other. If Newtonian gravity turned out to be wrong that would indeed be extremely exciting, but I think it’s a wiser bet that there are uncontrolled experimental systematics. On the other hand there is a danger that we might ignore evidence that there’s something fundamentally wrong with our theory. It’s sometimes difficult to judge how seriously to take experimental results.

Anyway, I don’t know what cosmologists think in general about this so there’s an excuse for a poll:


What does “Big Data” mean to you?

Posted in The Universe and Stuff on April 7, 2016 by telescoper

On several occasions recently I’ve had to talk about Big Data for one reason or another. I’m always at a disadvantage when I do that because I really dislike the term. Clearly I’m not the only one who feels this way:

[Image: “Say Big Data one more time” meme]

For one thing, the term “Big Data” seems to me like describing the Ocean as “Big Water”. For another, it’s not really just how big the data set is that matters. Size isn’t everything, after all. There is much truth in Stalin’s comment that “Quantity has a quality all its own”, in that very large data sets allow you to do things you wouldn’t even try with smaller ones, but it can be complexity, rather than sheer size, that requires new methods of analysis.

[Image: the Planck CMB temperature map]

The biggest event in my own field of cosmology in the last few years has been the Planck mission. The data set is indeed huge: the above map of the temperature pattern in the cosmic microwave background has no fewer than 167 million pixels. That certainly caused some headaches in the analysis pipeline, but I think I would argue that this wasn’t really a Big Data project. I don’t mean that to be insulting to anyone, just that the main analysis of the Planck data was aimed at doing something very similar to what had been done (by WMAP), i.e. extracting the power spectrum of temperature fluctuations:

[Image: the Planck temperature power spectrum]

It’s a wonderful result of course that extends the measurements that WMAP made to much higher multipoles (i.e. smaller angular scales), but Planck’s goals were phrased in similar terms to those of WMAP – to pin down the parameters of the standard model to as high accuracy as possible. For me, a real “Big Data” approach to cosmic microwave background studies would involve doing something that couldn’t have been done at all with a smaller data set. An example that springs to mind is looking for indications of effects beyond the standard model.

Moreover, what passes for Big Data in some fields would be just called “data” in others. For example, the ATLAS detector at the Large Hadron Collider has about 150 million sensors delivering data 40 million times per second. There are about 600 million collisions per second, out of which perhaps one hundred per second are useful. The issue here is then one of dealing with an enormous rate of data in such a way as to be able to discard most of it very quickly. The same will be true of the Square Kilometre Array, which will acquire exabytes of data every day, out of which perhaps one petabyte will need to be stored. Both these projects involve data sets much bigger and more difficult to handle than what might pass for Big Data in other arenas.
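Just to put those numbers side by side, here’s the trivial arithmetic (my own rounding of the figures quoted in the paragraph above):

```python
# Back-of-envelope rates based on the round numbers quoted in the paragraph above.
lhc_collisions_per_s = 600e6   # collisions per second at the LHC
lhc_useful_per_s = 100         # events per second worth keeping
print(f"ATLAS keeps roughly 1 collision in {lhc_collisions_per_s / lhc_useful_per_s:,.0f}")

ska_acquired_per_day = 1e18    # of order an exabyte per day (assumed round number)
ska_stored_per_day = 1e15      # of order a petabyte per day (assumed round number)
print(f"SKA would store roughly {ska_stored_per_day / ska_acquired_per_day:.1%} of what it acquires")
```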

Books you can buy at airports about Big Data generally list the following four or five characteristics:

  1. Volume
  2. Velocity
  3. Variety
  4. Veracity
  5. Variability

The first two are about the size and acquisition rate of the data mentioned above but the others are more about qualitatively different matters. For example, in cosmology nowadays we have to deal with data sets which are indeed quite large, but also very different in form.  We need to be able to do efficient joint analyses of heterogeneous data structures with very different sampling properties and systematic errors in such a way that we get the best science results we can. Now that’s a Big Data challenge!


The Supervoid and the Cold Spot

Posted in Astrohype, Cosmic Anomalies, The Universe and Stuff on April 21, 2015 by telescoper

While I was away at the SEPnet meeting yesterday a story broke in the press about the discovery of a large underdensity in the distribution of galaxies. The discovery is described in a paper by Szapudi et al. in the journal Monthly Notices of the Royal Astronomical Society. The claim is that this structure in the galaxy distribution can account for the presence of a mysterious cold spot in the cosmic microwave background, shown here (circled) in the map generated by Planck:

[Image: the Planck CMB map with the Cold Spot circled]

I’ve posted about this feature myself here in the category Cosmic Anomalies.

The abstract of the latest paper is here:

We use the WISE-2MASS infrared galaxy catalogue matched with Pan-STARRS1 (PS1) galaxies to search for a supervoid in the direction of the cosmic microwave background (CMB) cold spot (CS). Our imaging catalogue has median redshift z ≃ 0.14, and we obtain photometric redshifts from PS1 optical colours to create a tomographic map of the galaxy distribution. The radial profile centred on the CS shows a large low-density region, extending over tens of degrees. Motivated by previous CMB results, we test for underdensities within two angular radii, 5°, and 15°. The counts in photometric redshift bins show significantly low densities at high detection significance, ≳5σ and ≳6σ, respectively, for the two fiducial radii. The line-of-sight position of the deepest region of the void is z ≃ 0.15–0.25. Our data, combined with an earlier measurement by Granett, Szapudi & Neyrinck, are consistent with a large Rvoid = (220 ± 50) h−1 Mpc supervoid with δm ≃ −0.14 ± 0.04 centred at z = 0.22 ± 0.03. Such a supervoid, constituting at least a ≃3.3σ fluctuation in a Gaussian distribution of the Λ cold dark matter model, is a plausible cause for the CS.

The result is not entirely new: it has been discussed at various conferences over the past year or so (e.g. this one), but this is the first refereed paper showing details of the discovery.

This gives me the excuse to post this wonderful cartoon, the context of which is described here. Was that really in 1992? That was more than twenty years ago!

Anyway, I just wanted to make a few points about this because some of the press coverage has been rather misleading. I’ve therefore filed this one in the category Astrohype.

First, the “supervoid” structure that has been discovered is not a “void”, which would be a region completely empty of galaxies. As the paper makes clear it is less dramatic than that: it’s basically an underdensity of around 14% in the density of galaxies. It is (perhaps) the largest underdensity yet found on such a large scale – though that depends very much on how you define a void – but it is not in itself inconsistent with the standard cosmological framework. Such large underdensities are expected to be rare, but rare things do occur if you survey a large enough volume of the universe. Large overdensities also arise as statistical fluctuations in large volumes.

Second, and probably most importantly, although this “supervoid” is in the direction of the CMB Cold Spot it cannot on its own explain the Cold Spot; the claim in the abstract that it provides a plausible explanation of the Cold Spot is simply incorrect. A void can affect the measured temperature of the CMB through the Integrated Sachs-Wolfe effect: photons crossing the underdense region pick up a net redshift, because its gravitational potential decays while they are in transit (as happens once dark energy dominates the expansion), so the CMB looks cooler in the direction of the void. However, even optimistic calculations of the magnitude of the effect suggest that this particular “void” can only account for about 10% of the signal associated with the Cold Spot. This is a reasonably significant contribution but it does not account for the signal on its own.

This is not to say, however, that it is irrelevant. It could well be that the supervoid actually sits in front of a region of the CMB sky that was already cold as a result of a primordial fluctuation rather than a line-of-sight effect; such a coincidence could quite plausibly arise by chance. If the original perturbation were a “3σ” temperature fluctuation then the additional effect of the supervoid would turn it into a 3.3σ effect. Since this pushes the event further out into the tail of the probability distribution it makes a reasonably uncommon feature look less probable. Because the tail of a Gaussian distribution drops off very quickly this has quite a large effect on the probability. For example, a fluctuation of 3.3σ or greater has a probability of 0.00048 whereas one of 3.0σ has a probability of 0.00135, about a factor of 2.8 larger. That’s an effect, but not a large one.
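Those tail probabilities are easy to check with the standard Gaussian survival function; here’s a quick sketch:

```python
# Tail probabilities for a Gaussian distribution, to check the numbers quoted above.
from scipy.stats import norm

p_30 = norm.sf(3.0)   # probability of a fluctuation of 3.0 sigma or greater
p_33 = norm.sf(3.3)   # probability of a fluctuation of 3.3 sigma or greater
print(f"P(>3.0 sigma) = {p_30:.5f}")   # about 0.00135
print(f"P(>3.3 sigma) = {p_33:.5f}")   # about 0.00048
print(f"ratio = {p_30 / p_33:.1f}")    # about 2.8
```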

In summary, I think the discovery of this large underdensity is indeed interesting but it is not a plausible explanation for the CMB Cold Spot. Not, that is, unless there’s some new physical process involved in the propagation of light that we don’t yet understand.

Now that would be interesting…

Planck Update

Posted in The Universe and Stuff on February 5, 2015 by telescoper

Just time for a very quick post today to pass on the news that most of the 2015 crop of papers from the Planck mission have now been released and are available to download here. You can also find some related data products here.

I haven’t had time to look at these in any detail myself, but my attention was drawn (in the light of the recently released combined analysis of Planck and BICEP2/Keck data) to the constraints on inflationary cosmological models shown in this figure:

[Image: Planck 2015 constraints on inflationary models]

It seems that the once-popular (because it is simple) m^2 \phi^2 model of inflation is excluded at greater than 99% confidence…
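For context, here’s a reminder of why that model struggles (my own sketch using the standard slow-roll formulae, not anything taken from the Planck papers). For a potential V ∝ φ^p the usual results are n_s ≈ 1 - (p+2)/(2N) and r ≈ 4p/N, where N is the number of e-folds before the end of inflation at which the observable scales left the horizon; for p = 2 the predicted tensor-to-scalar ratio is uncomfortably large:

```python
# Standard slow-roll predictions for V ~ phi^p, to see why the p = 2 (m^2 phi^2)
# model is in trouble with the Planck constraints on n_s and r.
def phi_p_predictions(p, N):
    n_s = 1.0 - (p + 2.0) / (2.0 * N)   # scalar spectral index
    r = 4.0 * p / N                     # tensor-to-scalar ratio
    return n_s, r

for N in (50, 60):
    n_s, r = phi_p_predictions(2, N)
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.3f}")
# N = 50 gives (0.960, 0.160); N = 60 gives (0.967, 0.133)
```

With r in the range 0.13–0.16 the model sits well outside the region allowed by the contours in the figure above, which is essentially why it is now disfavoured.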

Feel free to add reactions to any of the papers in the new release via the comments box!

The BICEP2 Bubble Bursts…

Posted in The Universe and Stuff on January 30, 2015 by telescoper

I think it’s time to break the worst-kept secret in cosmology, concerning the claimed detection of primordial gravitational waves by the BICEP2 collaboration that caused so much excitement last year; see this blog, passim. If you recall, the biggest uncertainty in this result derived from the fact that it was made at a single frequency, 150 GHz, so it was impossible to determine the spectrum of the signal. Since dust in our own galaxy emits polarized light in the far-infrared there was no direct evidence to refute the possibility that this is what BICEP2 had detected. The indirect arguments presented by the BICEP2 team (that there should be very little dust emission in the region of the sky they studied) were challenged, but the need for further measurements was clear.

Over the rest of last year, the BICEP2 team collaborated with the consortium working on the Planck satellite, which has measurements over the whole sky at a wide range of frequencies. Of particular relevance to the BICEP2 controversy are the Planck measurements at such high frequency that they are known to be dominated by dust emission, specifically the 353 GHz channel. Cross-correlating these data with the BICEP2 measurements (and also data from the Keck Array, which is run by the same team) should allow the part of the BICEP2 signal that is due to dust emission to be identified and subtracted. What’s left would be the bit that’s interesting for cosmology. This is the work that has been going on, the results of which will officially hit the arXiv next week.
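As an aside, the reason the 353 GHz channel is so useful is that thermal dust emission rises steeply with frequency. Here’s a rough sketch (my own, not taken from the joint analysis itself) that models the dust as a modified blackbody with assumed parameters β_d ≈ 1.6 and T_d ≈ 19.6 K, converts to CMB thermodynamic temperature units, and ignores bandpass effects:

```python
# A rough sketch of the frequency scaling of polarized thermal dust emission, modelled
# as a modified blackbody with assumed parameters beta_d ~ 1.6 and T_d ~ 19.6 K, and
# expressed in CMB thermodynamic temperature units (bandpass effects ignored).
import numpy as np

h = 6.62607e-34   # Planck constant [J s]
k = 1.38065e-23   # Boltzmann constant [J/K]
T_cmb = 2.725     # CMB temperature [K]

def planck_B(nu, T):
    """Planck function B_nu(T), up to constants that cancel in the ratio below."""
    x = h * nu / (k * T)
    return nu**3 / np.expm1(x)

def dB_dT_cmb(nu):
    """dB_nu/dT at T_cmb (brightness to thermodynamic-temperature conversion), up to constants."""
    x = h * nu / (k * T_cmb)
    return nu**4 * np.exp(x) / np.expm1(x)**2

def dust_thermo(nu, beta_d=1.6, T_d=19.6):
    """Dust signal in thermodynamic temperature units (arbitrary overall normalisation)."""
    return nu**beta_d * planck_B(nu, T_d) / dB_dT_cmb(nu)

ratio = dust_thermo(150e9) / dust_thermo(353e9)
print(f"dust amplitude at 150 GHz relative to 353 GHz: ~{ratio:.3f}")   # roughly 0.04-0.05
```

The dust amplitude at 150 GHz comes out at only a few per cent of that at 353 GHz, so a map that is overwhelmingly dust at 353 GHz can be scaled down and used as a template for the much smaller dust contamination in the 150 GHz data.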

However, news has been leaking out over the last few weeks about what the paper will say. Being the soul of discretion I decided not to blog about these rumours, but yesterday I saw that the killer graph had been posted, so I’ve decided to share it here:

[Image: the B-mode polarization power spectrum from the joint BICEP2/Keck and Planck analysis]

The black dots with error bars show the original BICEP/Keck “detection” of B-mode polarization which they assumed was due to primordial gravitational waves. The blue dots with error bars show the results after subtracting the correlated dust component. There is clearly a detection of B-mode polarization. However, the red curve shows the B-mode polarization that’s expected to be generated not by primordial gravitational waves but by gravitational lensing; this signal is already known. There’s a slight hint of an excess over the red curve at multipoles of order 200, but it is not statistically significant. Note that the error bars are larger when proper uncertainties are folded in.

Here’s a quasi-official statement of the result (originally issued in French) that has been floating around on Twitter:

[Image: statement on the joint BICEP2/Keck and Planck analysis]

To be blunt, therefore, the BICEP2 measurement is a null result for primordial gravitational waves. It’s by no means a proof that there are no gravitational waves at all, but it isn’t a detection. In fact, for the experts, the upper limit on the tensor-to-scalar ratio R from this analysis is R<0.13 at 95% confidence, so there’s actually still room for a sizeable contribution from gravitational waves, but we haven’t found it yet.

The search goes on…

UPDATE: As noted below in the comments, the actual paper has now been posted online here along with supplementary materials. I’m not surprised as the cat is already well and truly out of the bag, with considerable press interest, some of it driving traffic here!

UPDATE TO THE UPDATE: There’s a news item in Physics World and another in Nature News about this, both with comments from me and others.

Planck Talks Online!

Posted in The Universe and Stuff on December 11, 2014 by telescoper

After yesterday’s frivolity, I return to community service mode today with a short post before a series of end-of-term meetings.

You may recall that not long ago I posted an item about a meeting in Ferrara which started on 1st December and which concerned results from the Planck satellite. Well, although the number of new results was disappointingly limited, all the talks given at that meeting are now available online here. Not all of the talks are about new Planck results, and some of those that are merely give tasters of things that will be more completely divulged in due course, but there is still a lot of interesting material there, so I recommend cosmology types have a good look through. Any comments would be welcome through the usual channel below.

I’ll take this opportunity to pass on another couple of related items. First is that there is another meeting on Planck, in Paris next week. Coincidentally, I will be in Paris on Monday and Tuesday for a completely unrelated matter (of which more anon) but I will try to keep up with the cosmology business via Twitter etc and pass on whatever I can pick up.

The other bit of news is that there is to be a press conference on December 22nd at which I’m led to believe the outcome of the joint analysis of CMB polarization by Planck and BICEP2 will be unveiled. Now that will be interesting, so stay tuned!

Oh, and my poll on this subject is still open: