Archive for New Scientist

Nailing Cosmological Jelly to the Wall

Posted in The Universe and Stuff on November 28, 2016 by telescoper

When asked to provide comments for a recent piece about cosmology in New Scientist, all I could come up with was the quote in the following excerpt:

But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.”

Certain of my colleagues have cast doubt on whether I am qualified to comment on the nailing of jelly to the wall, so I feel obliged to share the results of my highly successful research into this in the form of the following photograph:

I regret that I was unable to find any Dark Jelly, so had to settle for the more familiar baryonic type. Also, for the record, I should point out that what is shown is actually jelly concentrate. A similar experiment with the more normal diluted form of jelly was somewhat less successful.

I hope this clarifies the situation.

The Mystery of Cosmic Magnetism

Posted in The Universe and Stuff on May 13, 2013 by telescoper

I came across an article in New Scientist recently on the topic of cosmological magnetism. The piece is about an article by Leonardo Campanelli, which is available on the arXiv and which is apparently due to be published in Physical Review Letters. So it must be right.

Here’s the abstract:

We calculate, in the free Maxwell theory, the renormalized quantum vacuum expectation value of the two-point magnetic correlation function in de Sitter inflation. We find that quantum magnetic fluctuations remain constant during inflation instead of being washed out adiabatically, as usually assumed in the literature. The quantum-to-classical transition of super-Hubble magnetic modes during inflation allows us to treat the magnetic field classically after reheating, when it is coupled to the primeval plasma. The actual magnetic field is scale independent and has an intensity of a few × 10⁻¹² G if the energy scale of inflation is a few × 10¹⁶ GeV. Such a field accounts for galactic and galaxy cluster magnetic fields.

So why is this interesting? Let me explain….

If you’re stuck for a question to ask at the end of an astronomy seminar and don’t want to reveal the fact that you were asleep for most of it, there are some general questions that you can nearly always ask regardless of the topic of the talk without appearing foolish. A few years ago, “how would the presence of dust affect your conclusions?” was quite a good one, but the danger these days is that with the development of far-infrared and submillimetre instrumentation and the proliferation of people using it, this could actually have been the topic of the talk you just dozed through. However, no technological advances have threatened the viability of another old stalwart: “What about magnetic fields?”.

In theory, galaxies condense out of the Big Bang as lumps of dark matter. Seeded by primordial density fluctuations and amplified by the action of gravity, these are supposed to grow in a hierarchical, bottom-up fashion with little blobs forming first and then merging into larger objects. The physics of this process is relatively simple (at least if the dark matter is cold) as it involves only gravity.

But, by definition, the dark matter can’t be seen. At least not directly, though its presence can be inferred indirectly by dynamical measurements and gravitational lensing. What astronomers generally see is starlight, although it often arrives at the telescope in an unfamiliar part of the spectrum owing to the redshifting effect of the expansion of the Universe. The stars in galaxies sit inside the blobs of dark matter, which are usually called “haloes” although blobs is a better name. In art the whole purpose of a halo is that you can see it.

How stars form is a very complicated question to answer even when you’re asking about nearby stellar nurseries like the Orion Nebula. The basic idea is that a gas cloud cools and contracts, radiating away energy until it gets sufficiently hot that nuclear burning switches on and pressure is generated that can oppose further collapse. The early stages of this process, though, involve very many imponderables. Star formation doesn’t just involve gravity but lots of other processes, including additional volumes of Landau & Lifshitz, such as hydrodynamics, radiative transfer and, yes, magnetic fields. Naively, despite the complicated physics, it might still be imagined that stars form in the little blobs of dark matter first and then gradually get incorporated in larger objects.

Unfortunately, it is becoming increasingly obvious that this naive picture doesn’t quite work. Deep surveys of galaxies suggest that the most massive galaxies formed their stars quite early in the Big Bang and have been relatively quiescent since then, while smaller objects contain younger stars. In other words, pretty much the opposite of what one might have thought. This phenomenon (known as “downsizing”) suggests that something inhibits star formation early on in all but the largest haloes. It could be that powerful feedback from activity in the nuclear regions associated with a central black hole might do this, or it could be something a little less exotic such as stellar winds. Or it could be that the whole scheme is wrong in a more fundamental way. I personally wouldn’t go so far as to throw out the whole framework, as it has scored many successes, but it is definitely an open question what is going on.

A paper  in Nature a few years ago by Art Wolfe and collaborators revealed the presence of an enormously strong magnetic field in a galaxy at the relatively high redshift of 0.692. Actually it’s about 84 microGauss. OK, so this is just one object but the magnetic field in it is remarkably strong. It could be a freak occurrence resulting from some kind of shock or bubble, but it does seem to fit in a pattern in which young galaxies generally seem to have much higher magnetic fields than previously expected. Obviously we need to know how many more such magnetic monsters are lurking out there.

So why are these results so surprising? Didn’t we already know galaxies have magnetic fields in them?

Well, yes we did. The Milky Way has a magnetic field with a strength of about 10 microGauss, much lower than that discovered by Wolfe et al. But the point is that, if we understand them properly, galactic magnetic fields are supposed to have been much lower in the past than they are now. The standard theoretical picture is that a (tiny) initial seed field is amplified by a kind of dynamo operating by virtue of the strong differential rotation in disk galaxies. This makes the field grow exponentially with time, so that relatively few rotations of the galaxy are needed to make a large field out of a very small one. Eventually this dynamo probably quenches when the field has an energy density comparable to the gas in the galaxy (which is roughly the situation we find in our own Galaxy).

Hopefully you now see the problem. If the field is being wound up quickly then younger galaxies (those whose light comes to us from a long way away) should have much smaller magnetic fields than nearby ones. But they don’t seem to behave in this way.
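To put rough numbers on this, consider a toy model in which the dynamo amplifies the field exponentially, B(t) = B0 exp(t/τ), with an e-folding time τ of order the rotation period; the growth time then depends only logarithmically on the seed. The seed strength, e-folding time and target field below are illustrative assumptions of mine, not numbers from any of the papers discussed:

```python
import math

def efolds_needed(b_seed_gauss, b_final_gauss):
    """Number of e-foldings required for exponential dynamo growth
    to amplify a field from b_seed_gauss up to b_final_gauss."""
    return math.log(b_final_gauss / b_seed_gauss)

def growth_time_gyr(b_seed_gauss, b_final_gauss, tau_gyr=0.25):
    """Time in Gyr for that amplification, assuming a constant
    e-folding time tau_gyr (here ~ a galactic rotation period)."""
    return efolds_needed(b_seed_gauss, b_final_gauss) * tau_gyr

# Growing a hypothetical 1e-18 G seed to a 1e-5 G (10 microGauss) field:
n_efolds = efolds_needed(1e-18, 1e-5)       # ~30 e-foldings
t_gyr = growth_time_gyr(1e-18, 1e-5)        # ~7.5 Gyr at tau = 0.25 Gyr
```

On these (assumed) numbers a tiny seed needs a substantial fraction of the age of the Universe to reach present-day strengths, which is why strong fields at significant redshift sit so awkwardly with the dynamo picture.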

A few years ago, I wrote a paper about a model in which the galactic fields weren’t produced by a dynamo but were primordial in origin and quite large from the start. If that’s the case then the field doesn’t need to be amplified anywhere near as rapidly as it does when it starts from a tiny seed.

The problem is that it has previously been thought very difficult for any cosmological model involving inflation to generate a significant primordial magnetic field without invoking very exotic physics, such as breaking the conformal invariance of electrodynamics (which would mean, among other things, giving the photon a rest mass).

The interesting thing about Campanelli’s paper is that it suggests a straightforward mechanism for inflation to generate interesting magnetic phenomena. I’m not an expert on the techniques used in this paper, so can’t comment on the accuracy of the calculations. I’d be very grateful for any comments on this, actually. Me, I’m an old fogey who’s very suspicious of anything that relies too heavily on renormalization. I do however agree with Larry Widrow, quoted in the New Scientist piece.

But even if primordial magnetic fields can be generated by inflation, the question of their impact on the origin and evolution of galaxies and other cosmic structures remains unsolved. Although we know magnetism exists, it is notoriously difficult to understand its behaviour when it is coupled to all the other messy things we have to deal with in astrophysics. It’s a kind of polar opposite of dark matter, which we don’t know (for sure) exists but which only acts through gravity, so its behaviour is easier to model. This is the main reason why cosmological theorists prefer to think about dark matter rather than magnetic fields. I’d hazard a guess that this is one problem that won’t be resolved soon either. Things are complicated enough already!

It is also worth considering the possibility that magnetic fields might play a role in moderating the processes by which gas turns into stars within protogalaxies. At the very least, a magnetic field generates stresses that influence the onset of collapse. Although the evidence is mounting that they may be important, it is still by no means obvious that magnetic fields do provide the required missing link between dark matter haloes and stars. On the other hand, we now have fewer reasons for ignoring them.

Clusters, Splines and Peer Review

Posted in Bad Statistics, Open Access, The Universe and Stuff on June 26, 2012 by telescoper

Time for a grumpy early morning post while I drink my tea.

There’s an interesting post on the New Scientist blog site by that young chap Andrew Pontzen who works at Oxford University (in the Midlands). It’s on a topic that’s very pertinent to the ongoing debate about Open Access. One of the points the academic publishing lobby always makes is that Peer Review is essential to ensure the quality of research. The publishers also often try to claim that they actually do Peer Review, which they don’t. That’s usually done, for free, by academics.

But the point Andrew makes is that we should also think about whether the form of Peer Review that journals undertake is any good anyway.  Currently we submit our paper to a journal, the editors of which select one (or perhaps two or three) referees to decide whether it merits publication. We then wait – often many months – for a report and a decision by the Editorial Board.

But there’s also a free online repository called the arXiv which all astrophysics papers eventually appear on. Some researchers like to wait for the paper to be refereed and accepted before putting it on the arXiv, while others, myself included, just put it on the arXiv straight away when we submit it to the journal. In most cases one gets prompter and more helpful comments by email from people who read the paper on arXiv than from the referee(s).

Andrew questions why we trust the reviewing of a paper to one or two individuals chosen by the journal when the whole community could do the job quicker and better. I made essentially the same point in a post a few years ago:

I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., Reddit, and it would be better than anything we get from academic publishers at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to it. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
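The core of the voting scheme described above is trivial to implement; here is a minimal sketch (the class and method names, and the sample identifiers, are my own invention, purely illustrative):

```python
class OverlayReviews:
    """Toy arXiv-overlay peer review: registered users vote papers up
    or down; a paper's score is the running sum of its votes."""

    def __init__(self):
        self.registered = set()
        self.scores = {}

    def register(self, username):
        self.registered.add(username)

    def vote(self, username, paper_id, up=True):
        # Restrict voting to registered users only, as suggested above
        if username not in self.registered:
            raise PermissionError("only registered users may vote")
        self.scores[paper_id] = self.scores.get(paper_id, 0) + (1 if up else -1)

    def score(self, paper_id):
        return self.scores.get(paper_id, 0)
```

The hard part, of course, is not the code but persuading a community to use it; the sketch just shows how little machinery the basic idea requires.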

In any case I don’t think the current system of Peer Review provides the Gold Standard that publishers claim it does. It’s probably a bit harsh to single out one example, but then I said I was feeling grumpy, so here’s something from a paper that we’ve been discussing recently in the cosmology group at Cardiff. The paper is by Gonzalez et al. and is called IDCS J1426.5+3508: Cosmological implications of a massive, strong lensing cluster at z = 1.75. The abstract reads

The galaxy cluster IDCS J1426.5+3508 at z = 1.75 is the most massive galaxy cluster yet discovered at z > 1.4 and the first cluster at this epoch for which the Sunyaev-Zel’dovich effect has been observed. In this paper we report on the discovery with HST imaging of a giant arc associated with this cluster. The curvature of the arc suggests that the lensing mass is nearly coincident with the brightest cluster galaxy, and the color is consistent with the arc being a star-forming galaxy. We compare the constraint on M200 based upon strong lensing with Sunyaev-Zel’dovich results, finding that the two are consistent if the redshift of the arc is z > 3. Finally, we explore the cosmological implications of this system, considering the likelihood of the existence of a strongly lensing galaxy cluster at this epoch in an LCDM universe. While the existence of the cluster itself can potentially be accommodated if one considers the entire volume covered at this redshift by all current high-redshift cluster surveys, the existence of this strongly lensed galaxy greatly exacerbates the long-standing giant arc problem. For standard LCDM structure formation and observed background field galaxy counts this lens system should not exist. Specifically, there should be no giant arcs in the entire sky as bright in F814W as the observed arc for clusters at z ≥ 1.75, and only ∼0.3 as bright in F160W as the observed arc. If we relax the redshift constraint to consider all clusters at z ≥ 1.5, the expected number of giant arcs rises to ∼15 in F160W, but the number of giant arcs of this brightness in F814W remains zero. These arc statistics results are independent of the mass of IDCS J1426.5+3508. We consider possible explanations for this discrepancy.

Interesting stuff indeed. The paper has been accepted for publication by the Astrophysical Journal too.

Now look at the key result, Figure 3:

I’ll leave aside the fact that there aren’t any error bars on the points, and instead draw your attention to the phrase “The curves are spline interpolations between the data points”. For the red curve only two “data points” are shown; actually the points are from simulations, so aren’t strictly data, but that’s not the point. I would have expected an alert referee to ask for all the points needed to form the curve to be shown, and it takes more than two points to make a spline.  Without the other point(s) – hopefully there is at least one more! – the reader can’t reproduce the analysis, which is what the scientific method requires, especially when a paper makes such a strong claim as this.
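For what it’s worth, the reason two points are not enough is easy to see: with only two knots, the “natural” boundary conditions (zero second derivative at both ends) force the single cubic piece to have no curvature at all. A quick sketch, with made-up points:

```python
def natural_cubic_spline_two_knots(p0, p1):
    """With exactly two knots, a natural cubic spline (second derivative
    zero at both endpoints) degenerates to the straight line through
    them: s''(x) = 6ax + 2b vanishing at both ends forces a = b = 0.
    So any curvature in a plotted 'spline' must come from extra points."""
    (x0, y0), (x1, y1) = p0, p1
    slope = (y1 - y0) / (x1 - x0)
    return lambda x: y0 + slope * (x - x0)

# Two made-up 'data points': the resulting spline is just a line segment
s = natural_cubic_spline_two_knots((0.0, 0.0), (2.0, 4.0))
```

So if the red curve genuinely bends between the two plotted points, at least one further point must exist somewhere off the plot.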

I’m guessing that the third point is at zero (which is at – ∞ on the log scale shown in the graph), but surely that must have an error bar on it, deriving from the limited simulation size?

If this paper had been put on a system like the one I discussed above, I think this would have been raised…

Dark Squib

Posted in Bad Statistics, Science Politics, The Universe and Stuff on December 19, 2009 by telescoper

After today’s lengthy pre-Christmas traipse around Cardiff in the freezing cold, I don’t think I can summon up the energy for a lengthy post today. However, today’s cryogenic temperatures did manage to remind me that I hadn’t closed the book on a previous story about rumours of a laboratory detection of dark matter by the experiment known as CDMS. The main rumour – that there was going to be a paper in Nature reporting the definite detection of dark matter particles – turned out to be false, but there was a bit of truth after all, in that they did put out a paper yesterday (18th December, the date that the original rumour suggested their paper would come out).  There’s also an executive summary of the results here.

It turns out that the experiment has seen two events that might, just might, be the Weakly Interacting Massive Particles (WIMPs) that are most theorists’ favoured candidate for cold dark matter. However, they might also be due to background events generated by other stray particles getting into the works. It’s impossible to tell at this stage whether the signal is real or not. Based on the sort of naive frequentist statistical treatment of the data that for some reason is what particle physicists seem to prefer, there’s a 23% chance of their signal being background rather than dark matter. In other words, it’s about a one-sigma detection. In fact, if you factor in the possibility of a systematic error in the background counts – these are very difficult things to calibrate precisely – then the significance of the result decreases even further. And if you do it all properly, in a Bayesian way with an appropriate prior, then the most probable result is no detection. Andrew Jaffe gives some details on his blog.

There is no universally accepted criterion for what constitutes a definite detection, but I’ve been told recently by the editor of Nature himself that if it’s less than 3-sigma (a probability of about 0.3% of it arising by chance) then they’re unlikely to publish it. If it’s 2-sigma (about 5%) then it’s interesting but not conclusive; at 1-sigma it’s not worth writing home about, never mind writing a press release.
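For readers who want to translate between tail probabilities and sigma levels, the conversion just involves the Gaussian distribution; a quick sketch using the Python standard library (the 23% figure is the one quoted above; the other numbers are rounded):

```python
from statistics import NormalDist

_norm = NormalDist()  # standard Gaussian, mean 0, sigma 1

def sigma_to_tail_prob(n_sigma, two_sided=True):
    """Probability of a fluctuation at least n_sigma from the mean."""
    p = 1.0 - _norm.cdf(n_sigma)
    return 2.0 * p if two_sided else p

def prob_to_sigma(p, two_sided=False):
    """Significance in sigma corresponding to a tail probability p."""
    tail = p / 2.0 if two_sided else p
    return _norm.inv_cdf(1.0 - tail)

# A 23% chance of being background is roughly three-quarters of a sigma:
sig = prob_to_sigma(0.23)        # ~0.74 sigma
p2 = sigma_to_tail_prob(2.0)     # ~0.046, the familiar "2-sigma ~ 5%"
p3 = sigma_to_tail_prob(3.0)     # ~0.0027, i.e. about 0.3%
```

Whether one quotes one-sided or two-sided probabilities is a matter of convention, which is part of why these significance figures get garbled so easily in press coverage.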

I should  add that none of their results has yet been subject to peer review either. I can only guess that CDMS must be undergoing a funding review pretty soon and wanted to use the media to show it was producing the goods. I can’t say I’m impressed with these antics, and I doubt if the reviewers will be either.

Unfortunately, the fact that this is all so inconclusive from a scientific point of view hasn’t stopped various organs getting hold of the wrong end of the stick and starting to beat about the bush with it. New Scientist‘s Twitter feed screamed

Clear signal of dark matter detected in Minnesota!

although the article itself was a bit better informed. The Guardian ran a particularly poor story,  impressive only in the way it crammed so many misconceptions into such a short piece.

This episode takes me back to a theme I’ve touched on many times on this blog, which is that scientific results are very rarely black-and-white and they have to be treated carefully in appropriate probabilistic terms. Unfortunately, the media and the public have a great deal of difficulty understanding the subtleties of this and what gets across in the public domain can be either garbled or downright misleading. Most often in science the correct answer isn’t “true” or “false” but somewhere in between.

Of course, with more measurements, better statistics and stronger control of systematics this CDMS result may well turn into a significant detection. If it does then it will be a great scientific breakthrough and they’ll have my congratulations straight away, tempered with a certain amount of sadness that there will be no UK competitors in the race owing to our recent savage funding cuts. But we’re not there yet. So far, it’s just a definite maybe.