Time for a grumpy early morning post while I drink my tea.
There’s an interesting post on the New Scientist blog site by that young chap Andrew Pontzen, who works at Oxford University (in the Midlands). It’s on a topic that’s very pertinent to the ongoing debate about Open Access. One of the points the academic publishing lobby always makes is that Peer Review is essential to assure the quality of research. The publishers also often try to claim that they actually do Peer Review, which they don’t: that’s usually done, for free, by academics.
But the point Andrew makes is that we should also think about whether the form of Peer Review that journals undertake is any good anyway. Currently we submit our paper to a journal, the editors of which select one (or perhaps two or three) referees to decide whether it merits publication. We then wait – often many months – for a report and a decision by the Editorial Board.
But there’s also a free online repository called the arXiv which all astrophysics papers eventually appear on. Some researchers like to wait for the paper to be refereed and accepted before putting it on the arXiv, while others, myself included, just put it on the arXiv straight away when we submit it to the journal. In most cases one gets prompter and more helpful comments by email from people who read the paper on arXiv than from the referee(s).
Andrew questions why we trust the reviewing of a paper to one or two individuals chosen by the journal when the whole community could do the job quicker and better. I made essentially the same point in a post a few years ago:
I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., reddit, and it would be better than anything we get from academic publishers at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
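For what it’s worth, the core of such a system really is trivial to build. Here’s a toy sketch in Python (all the names and the arXiv identifier are made up for illustration; a real system would obviously need persistent storage and proper authentication): voting is restricted to registered users, one vote per reader, revoting simply overwrites.

```python
from dataclasses import dataclass, field


@dataclass
class Paper:
    """A hypothetical arXiv-style entry with reddit-style scoring."""
    arxiv_id: str
    votes: dict = field(default_factory=dict)  # user -> +1 or -1


class ReviewBoard:
    """Toy community-review sketch: registered users vote papers up or down."""

    def __init__(self):
        self.registered = set()
        self.papers = {}

    def register(self, user):
        self.registered.add(user)

    def submit(self, arxiv_id):
        self.papers[arxiv_id] = Paper(arxiv_id)

    def vote(self, user, arxiv_id, value):
        # The key restriction: only registered users may vote.
        if user not in self.registered:
            raise PermissionError("voting is restricted to registered users")
        if value not in (+1, -1):
            raise ValueError("vote must be +1 or -1")
        # One vote per user: revoting overwrites the previous vote.
        self.papers[arxiv_id].votes[user] = value

    def score(self, arxiv_id):
        # Net score, reddit-style: upvotes minus downvotes.
        return sum(self.papers[arxiv_id].votes.values())


board = ReviewBoard()
board.register("alice")
board.register("bob")
board.submit("1234.5678")          # hypothetical arXiv id
board.vote("alice", "1234.5678", +1)
board.vote("bob", "1234.5678", +1)
print(board.score("1234.5678"))    # 2
```

The point is not the code, which any student could write in an afternoon, but that the hard part — an authenticated community of readers — is something the arXiv already has.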
In any case I don’t think the current system of Peer Review provides the Gold Standard that publishers claim it does. It’s probably a bit harsh to single out one example, but then I said I was feeling grumpy, so here’s something from a paper that we’ve been discussing recently in the cosmology group at Cardiff. The paper is by Gonzalez et al. and is called IDCS J1426.5+3508: Cosmological implications of a massive, strong lensing cluster at z = 1.75. The abstract reads
The galaxy cluster IDCS J1426.5+3508 at z = 1.75 is the most massive galaxy cluster yet discovered at z > 1.4 and the first cluster at this epoch for which the Sunyaev-Zel’dovich effect has been observed. In this paper we report on the discovery with HST imaging of a giant arc associated with this cluster. The curvature of the arc suggests that the lensing mass is nearly coincident with the brightest cluster galaxy, and the color is consistent with the arc being a star-forming galaxy. We compare the constraint on M200 based upon strong lensing with Sunyaev-Zel’dovich results, finding that the two are consistent if the redshift of the arc is z > 3. Finally, we explore the cosmological implications of this system, considering the likelihood of the existence of a strongly lensing galaxy cluster at this epoch in an LCDM universe. While the existence of the cluster itself can potentially be accommodated if one considers the entire volume covered at this redshift by all current high-redshift cluster surveys, the existence of this strongly lensed galaxy greatly exacerbates the long-standing giant arc problem. For standard LCDM structure formation and observed background field galaxy counts this lens system should not exist. Specifically, there should be no giant arcs in the entire sky as bright in F814W as the observed arc for clusters at z ≥ 1.75, and only ~0.3 as bright in F160W as the observed arc. If we relax the redshift constraint to consider all clusters at z ≥ 1.5, the expected number of giant arcs rises to ~15 in F160W, but the number of giant arcs of this brightness in F814W remains zero. These arc statistic results are independent of the mass of IDCS J1426.5+3508. We consider possible explanations for this discrepancy.
Interesting stuff indeed. The paper has been accepted for publication by the Astrophysical Journal too.
Now look at the key result, Figure 3:
I’ll leave aside the fact that there aren’t any error bars on the points, and instead draw your attention to the phrase “The curves are spline interpolations between the data points”. For the red curve only two “data points” are shown; actually the points are from simulations, so aren’t strictly data, but that’s not the point. I would have expected an alert referee to ask for all the points needed to form the curve to be shown, and it takes more than two points to make a spline. Without the other point(s) – hopefully there is at least one more! – the reader can’t reproduce the analysis, which is what the scientific method requires, especially when a paper makes such a strong claim as this.
I’m guessing that the third point is at zero (which is at −∞ on the log scale shown in the graph), but surely that must have an error bar on it, deriving from the limited simulation size?
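To see why two points don’t pin down a spline, here’s a toy natural cubic spline in pure Python (a generic illustration, nothing to do with the actual simulation points in the paper’s figure). With natural boundary conditions the second derivative vanishes at both ends, so a “spline” through only two points degenerates to the straight line between them; it takes a third point before the curve acquires any curvature at all.

```python
from bisect import bisect_right


def natural_cubic_spline(xs, ys):
    """Return the natural cubic spline S(x) interpolating (xs, ys).

    Natural boundary conditions (S'' = 0 at both ends) mean that with
    only two points the spline is just the straight line between them.
    """
    n = len(xs)
    if n < 2:
        raise ValueError("need at least two points")
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]

    # Second derivatives M[i] at the knots; natural BCs fix M[0] = M[n-1] = 0.
    M = [0.0] * n
    m = n - 2  # number of interior unknowns
    if m > 0:
        # Tridiagonal system, solved with the Thomas algorithm.
        b = [2.0 * (h[i] + h[i + 1]) for i in range(m)]          # diagonal
        c = [h[i + 1] for i in range(m - 1)]                     # super-diagonal
        d = [6.0 * ((ys[i + 2] - ys[i + 1]) / h[i + 1]
                    - (ys[i + 1] - ys[i]) / h[i]) for i in range(m)]
        for i in range(1, m):            # forward elimination (sub-diag = h[i])
            w = h[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        u = [0.0] * m
        u[m - 1] = d[m - 1] / b[m - 1]   # back substitution
        for i in range(m - 2, -1, -1):
            u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
        for i in range(m):
            M[i + 1] = u[i]

    def S(x):
        # Locate the segment containing x (clamped to the end segments).
        i = min(max(bisect_right(xs, x) - 1, 0), n - 2)
        hi, t0, t1 = h[i], xs[i + 1] - x, x - xs[i]
        return ((M[i] * t0**3 + M[i + 1] * t1**3) / (6 * hi)
                + (ys[i] - M[i] * hi * hi / 6) * t0 / hi
                + (ys[i + 1] - M[i + 1] * hi * hi / 6) * t1 / hi)

    return S


two = natural_cubic_spline([0, 1], [0, 1])
three = natural_cubic_spline([0, 1, 2], [0, 1, 0])
print(two(0.5))    # 0.5    -- two points give only the straight line
print(three(0.5))  # 0.6875 -- a third point bends the curve
```

With two points the value at x = 0.5 is exactly the linear interpolant, 0.5; add a third point and the same query returns 0.6875. So a curve drawn through just the two plotted points tells the reader nothing about what the interpolation actually did.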
If this paper had been put on a system like the one I discussed above, I think this would have been raised…