Archive for Peer Review

Research Hive on Open Access

Posted in Open Access, The Universe and Stuff on March 21, 2014 by telescoper

Near the end of a week that has been both exciting and exhausting, I had the opportunity to take part in a seminar on Open Access publishing. I agreed to do this last year sometime, and only remembered that it was today because I got an email reminder a couple of days ago! Anyway it was nice to have an excuse to visit the iconic Library of the University of Sussex for this event.

Fortunately, as things turned out, I had plenty of topical material to draw on for inspiration and spent some time discussing the possibilities of community peer review with reference to what’s been happening with BICEP2. Here’s me in the middle of the talk on that very subject showing the Live Discussion Facebook page:

[Image: Hive]

I shared the bill with Rupert Gatti of Open Book Publishers, which publishes mainly in the Arts and Humanities; generally speaking these disciplines are a long way behind astrophysics in terms of their readiness for the age of Open Access, but I think change across all academia is inevitable.

For those of you who are interested, I realize that an update on the Open Journal of Astrophysics is long overdue. I’ve just been too busy with other things to devote much time to it. I do hope to have further news very soon…

Elsevierballs

Posted in Open Access on December 16, 2012 by telescoper

telescoper:

Have you heard all the stories about the carefully-managed system of peer review that justifies the exorbitant cost of Elsevier journals? Then read this…

Originally posted on Retraction Watch:

For several months now, we’ve been reporting on variations on a theme: authors submitting fake email addresses for potential peer reviewers, to ensure positive reviews. In August, for example, we broke the story of one Hyung-In Moon, who has now retracted 24 papers published by Informa because he managed to do his own peer review.

Now, Retraction Watch has learned that the Elsevier Editorial System (EES) was hacked sometime last month, leading to faked peer reviews and retractions — although the submitting authors don’t seem to have been at fault. As of now, eleven papers by authors in China, India, Iran, and Turkey have been retracted from three journals.

Here’s one of two identical notices that have just run in Optics & Laser Technology, for two unconnected papers:


Clusters, Splines and Peer Review

Posted in Bad Statistics, Open Access, The Universe and Stuff on June 26, 2012 by telescoper

Time for a grumpy early morning post while I drink my tea.

There’s an interesting post on the New Scientist blog site by that young chap Andrew Pontzen who works at Oxford University (in the Midlands). It’s on a topic that’s very pertinent to the ongoing debate about Open Access. One of the points the academic publishing lobby always makes is that Peer Review is essential to assure the quality of research. The publishers also often try to claim that they actually do Peer Review, which they don’t. That’s usually done, for free, by academics.

But the point Andrew makes is that we should also think about whether the form of Peer Review that journals undertake is any good anyway.  Currently we submit our paper to a journal, the editors of which select one (or perhaps two or three) referees to decide whether it merits publication. We then wait – often many months – for a report and a decision by the Editorial Board.

But there’s also a free online repository called the arXiv, on which all astrophysics papers eventually appear. Some researchers like to wait for the paper to be refereed and accepted before putting it on the arXiv, while others, myself included, just put the paper on the arXiv straight away when submitting it to the journal. In most cases one gets prompter and more helpful comments by email from people who read the paper on the arXiv than from the referee(s).

Andrew questions why we trust the reviewing of a paper to one or two individuals chosen by the journal when the whole community could do the job quicker and better. I made essentially the same point in a post a few years ago:

I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g. reddit, and it would be better than anything we get from academic publishers at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
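To make that concrete, here’s a minimal sketch in Python of what the voting core of such an “arXiv plus” might look like. Everything in it – the names, the identifiers, the one-revisable-vote-per-user rule – is a hypothetical illustration of the idea, not a description of any real system:

```python
# A toy model of community peer review: papers accumulate up/down
# votes, but only registered users may vote. Purely illustrative;
# all names and identifiers here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Paper:
    arxiv_id: str
    votes: dict = field(default_factory=dict)  # user -> +1 or -1

    def score(self) -> int:
        return sum(self.votes.values())

class ReviewHive:
    def __init__(self) -> None:
        self.registered: set = set()
        self.papers: dict = {}

    def register(self, user: str) -> None:
        self.registered.add(user)

    def vote(self, user: str, arxiv_id: str, up: bool) -> None:
        if user not in self.registered:
            raise PermissionError("only registered users may vote")
        paper = self.papers.setdefault(arxiv_id, Paper(arxiv_id))
        paper.votes[user] = 1 if up else -1  # one revisable vote per user

hive = ReviewHive()
hive.register("alice")
hive.vote("alice", "1403.3985", up=True)
print(hive.papers["1403.3985"].score())  # -> 1
```

Good papers would float up, poor ones would sink, and the record of who thought what would stay out in the open.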

In any case I don’t think the current system of Peer Review provides the Gold Standard that publishers claim it does. It’s probably a bit harsh to single out one example, but then I said I was feeling grumpy, so here’s something from a paper that we’ve been discussing recently in the cosmology group at Cardiff. The paper is by Gonzalez et al. and is called “IDCS J1426.5+3508: Cosmological implications of a massive, strong lensing cluster at z = 1.75”. The abstract reads:

The galaxy cluster IDCS J1426.5+3508 at z = 1.75 is the most massive galaxy cluster yet discovered at z > 1.4 and the first cluster at this epoch for which the Sunyaev-Zel’dovich effect has been observed. In this paper we report on the discovery with HST imaging of a giant arc associated with this cluster. The curvature of the arc suggests that the lensing mass is nearly coincident with the brightest cluster galaxy, and the color is consistent with the arc being a star-forming galaxy. We compare the constraint on M200 based upon strong lensing with Sunyaev-Zel’dovich results, finding that the two are consistent if the redshift of the arc is z > 3. Finally, we explore the cosmological implications of this system, considering the likelihood of the existence of a strongly lensing galaxy cluster at this epoch in a ΛCDM universe. While the existence of the cluster itself can potentially be accommodated if one considers the entire volume covered at this redshift by all current high-redshift cluster surveys, the existence of this strongly lensed galaxy greatly exacerbates the long-standing giant arc problem. For standard ΛCDM structure formation and observed background field galaxy counts this lens system should not exist. Specifically, there should be no giant arcs in the entire sky as bright in F814W as the observed arc for clusters at z ≥ 1.75, and only ~0.3 as bright in F160W as the observed arc. If we relax the redshift constraint to consider all clusters at z ≥ 1.5, the expected number of giant arcs rises to ~15 in F160W, but the number of giant arcs of this brightness in F814W remains zero. These arc statistic results are independent of the mass of IDCS J1426.5+3508. We consider possible explanations for this discrepancy.

Interesting stuff indeed. The paper has been accepted for publication by the Astrophysical Journal too.

Now look at the key result, Figure 3:

I’ll leave aside the fact that there aren’t any error bars on the points, and instead draw your attention to the phrase “The curves are spline interpolations between the data points”. For the red curve only two “data points” are shown; actually the points come from simulations, so aren’t strictly data, but that’s not the point. I would have expected an alert referee to ask for all the points used to form the curve to be shown, because it takes more than two points to make a spline. Without the other point(s) – hopefully there is at least one more! – the reader can’t reproduce the analysis, which is what the scientific method requires, especially when a paper makes such a strong claim as this one does.
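That requirement is easy to demonstrate. Here’s a quick sketch using scipy – the numbers are invented for illustration and are not taken from the paper – showing that a cubic spline routine refuses a fit through only two points, because a spline of degree k needs more than k of them:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Two points, as plotted for the red curve: FITPACK will not fit a
# cubic spline (k = 3) through them, since m > k points are required.
x = np.array([1.5, 1.75])   # invented redshift thresholds
y = np.array([15.0, 0.3])   # invented expected arc counts
try:
    splrep(x, y, k=3)
except (TypeError, ValueError) as err:
    print("cubic spline through two points fails:", err)

# With four (equally invented) points the fit succeeds:
x4 = np.array([1.0, 1.25, 1.5, 1.75])
y4 = np.array([120.0, 45.0, 15.0, 0.3])
tck = splrep(x4, y4, k=3)
print(float(splev(1.6, tck)))  # spline evaluated at z = 1.6
```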

I’m guessing that the third point is at zero (which is at −∞ on the log scale shown in the graph), but surely that must have an error bar on it, deriving from the limited simulation size?
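To put a rough number on that: a finite batch of simulations that turns up zero arcs doesn’t pin the expectation at zero. Here is a quick sketch of the standard Poisson upper limits on the mean for zero observed events (see e.g. Gehrels 1986) – again illustrative, nothing here is taken from the paper:

```python
from scipy.stats import chi2

# Classical upper limit on a Poisson mean after observing n = 0 events:
# mu_up = chi2.ppf(CL, 2*(n+1)) / 2, which reduces to -ln(1 - CL) for n = 0.
for cl in (0.84, 0.95):
    print(f"{cl:.0%} upper limit: {chi2.ppf(cl, 2) / 2:.2f} expected events")
# -> 84% upper limit: 1.83 expected events
# -> 95% upper limit: 3.00 expected events
```

So even the “zero” point should carry a one-sided error bar of a few expected events, with the exact size set by the volume of the simulations.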

If this paper had been put on a system like the one I discussed above, I think this would have been raised…

A Poll about Peer Review

Posted in Science Politics on September 13, 2011 by telescoper

Anxious not to let the momentum dissipate from the discussion of scientific publishing, I thought I’d try a quick poll to see what people think about the issue of peer review. In my earlier posts I’ve advanced the view that, at least in the subject I work in (astrophysics), peer review achieves very little, and it is also extremely expensive when done by traditional journals. I think it could be replaced by a kind of crowd-sourcing, in which papers are put on an open-access archive or repository of some sort, and can then be commented upon by the community and from where they can be cited by other researchers. If you like, a sort of “arXiv plus”. Good papers will attract attention, poor ones will disappear. Such a system also has the advantage of guaranteeing open public access to research papers (although not necessarily open submission, which would have to be restricted to registered users only).

However, this is all just my view and I have no idea really how strongly others rate the current system of peer review. The following poll is not very scientific, but I’ve tried to include a reasonably representative range of views, from “everything’s OK – let’s keep the current system” to the radical suggestion I make above.

Of course, if you have other views about peer review or academic publishing generally, please feel free to post them through the comments box.

Uninformed, Unhinged, and Unfair — The Monbiot Rant (via The Scholarly Kitchen)

Posted in Uncategorized on September 2, 2011 by telescoper

I had to force myself to use the “Like” option on WordPress on this one, because that’s the only way to reblog posts….

This supercilious item is an attempt to counter a polemical piece in the Grauniad recently by George Monbiot. That article was about the extortionate cost and general uselessness of the so-called Learned Journals, i.e. precisely the Academic Journal Racket I’ve blogged about previously. I agree with most of what Monbiot says.

You can tell from the tone of the opening paragraph that this rejoinder doesn’t present a coherent argument, because it launches straight into invective. And notice too that this is from an academic publisher, so it’s hardly unbiased…

Nevertheless I thought I’d reblog this in the interest of balance. Indeed, if the best arguments for retaining the monstrous expense of “scholarly” journals are those presented here then it’s just a question of time before real scholars see them for what they are and get rid of them.

Come the revolution, next in line after the bankers….*

*For the benefit of the entirely humourless amongst you, let me stress that I am not advocating armed revolution, summary execution or any other form of violence against the academic publishing industry. This line is what we in my country call “a joke”.

Uninformed, Unhinged, and Unfair — The Monbiot Rant

I tried to ignore it. It deserved to be ignored — an ill-informed activist with academic aspirations using the Guardian as a pulpit to deliver a tiresome sermon filled with intentional misunderstandings, misinformation, and misapprehensions about academic publishing. It deserved to be ignored. Predictably, it caught fire in the blogosphere, on Twitter, and on Facebook. And now I feel compelled to jump into the fray. After all, the only coherent …

via The Scholarly Kitchen

Publish or be Damned

Posted in Science Politics, The Universe and Stuff on August 23, 2010 by telescoper

For tonight’s post I thought I’d compose a commentary on a couple of connected controversies suggested by an interestingly provocative piece by Nigel Hawkes in the Independent this weekend, entitled “Peer review journals aren’t worth the paper they’re written on”. Here is an excerpt:

The truth is that peer review is largely hokum. What happens if a peer-reviewed journal rejects a paper? It gets sent to another peer-reviewed journal a bit further down the pecking order, which is happy to publish it. Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival’s wheel – are seldom examined.

In contrast to many of my academic colleagues I largely agree with Nigel Hawkes, but I urge you to read the piece yourself to see whether you are convinced by his argument.

I’m not actually convinced that peer review is as biased as Hawkes asserts. I rather think that the strongest argument against the scientific journal establishment is the ruthless racketeering of the academic publishers that profit from it. Still, I do think he has a point. Scientists who garner esteem and influence in the public domain through their work should be required to defend it out in the open, to scientists and non-scientists alike. I’m not saying that’s easy to do in the face of ill-informed or even illiterate criticism, but it is in my view a necessary price to pay, especially when the research is funded by the taxpayer.

It’s not that I think many scientists are involved in sinister activities, manipulating their data and fiddling their results behind closed doors, but that as long as there is an aura of secrecy it will always fuel the conspiracy theories on which the enemies of reason thrive. We often hear the accusation that scientists behave as if they are priests. I don’t think they do, but there are certainly aspects of scientific practice that make it appear that way, and the closed world of academic publishing is one of the things that desperately needs to be opened up.

For a start, I think we scientists should forget academic journals and peer review, and publish our results directly in open access repositories. In the old days journals were necessary to communicate scientific work. Peer review guaranteed a certain level of quality. But nowadays it is unnecessary. Good work will achieve visibility through the attention others give it. Likewise open scrutiny will be a far more effective way of identifying errors than the existing referee process. Some steps will have to be taken to prevent abuse of access to the databases, and even then I suspect a great many crank papers will make it through. But in the long run, I strongly believe this is the only way that science can develop in the age of digital democracy.

But scrapping the journals is only part of the story. I’d also argue that all scientists undertaking publicly funded research should be required to put their raw data in the public domain too. I would allow a short proprietary period after the experiments, observations or whatever form of data collection is involved. I can also see that ethical issues may require certain data to be withheld, such as the names of subjects in medical trials. Issues will also arise when research is funded commercially rather than by the taxpayer. However, I still maintain that full disclosure of all raw data should be the rule rather than the exception. After all, if it’s research that’s funded by the public, it is really the public that owns the data anyway.

In astronomy this is pretty much the way things operate nowadays, in fact. Maybe stargazers have a more romantic way of thinking about scientific progress than their more earthly counterparts, but it is quite normal – even obligatory for certain publicly funded projects – for surveys to release all their data. I used to think that it was enough just to publish the final results, but I’ve become so distrustful of the abuse of statistics throughout the field that I think it is necessary for independent scientists to check every step of the analysis of every major result. In the past it was simply too difficult to publish large catalogues in a form that anyone could use, but nowadays that is simply no longer the case. Astronomers have embraced this reality, and it has liberated them.

To give a good example of the benefits of this approach, take the Wilkinson Microwave Anisotropy Probe (WMAP) which released full data sets after one, three, five and seven years of operation. Scores of groups around the world have done their best to find glitches in the data and errors in the analysis without turning up anything particularly significant. The standing of the WMAP team is all the higher for having done this, although I don’t know whether they would have chosen to had they not been required to do so under the terms of their funding!

In the world of astronomy research it’s not at all unusual to find data for the object or set of objects you’re interested in from a public database, or by politely asking another team if they wouldn’t mind sharing their results. And if you happen to come across a puzzling result you suspect might be erroneous and want to check the calculations, you just ask the author for the numbers and, generally speaking, they send them to you. A disagreement may ensue about who is right and who is wrong, but that’s the way science is supposed to work. Everything must be open to question. It’s often a chaotic process, but it’s a process all the same, and it is one that has served us incredibly well.

I was quite surprised recently to learn that this isn’t the way other scientific disciplines operate at all. When I challenged the statistical analysis in a neuroscience paper, my request to have a look at the data myself was greeted with a frosty refusal. The authors seemed to take it as a personal affront that anyone might have the nerve to question their study. I had no alternative but to go public with my doubts, and my concerns have never been satisfactorily answered. How many other examples are there wherein the application of the scientific method has come to a grinding halt because of compulsive secrecy? Nobody likes to have their failings exposed in public, and I’m sure no scientist likes to see an error pointed out, but surely it’s better to be seen to have made an error than to maintain a front that perpetuates the suspicion of malpractice?

Another, more topical, example concerns the University of East Anglia’s Climatic Research Unit which was involved in the Climategate scandal and which has apparently now decided that it wants to share its data. Fine, but I find it absolutely amazing that such centres have been able to get away with being so secretive in the past. Their behaviour was guaranteed to lead to suspicions that they had something to hide. The public debate about climate change may be noisy and generally ill-informed but it’s a debate we must have out in the open.

I’m not going to get all sanctimonious about ‘pure’ science, nor am I going to question the motives of individuals working in disciplines I know very little about. I would, however, say that from the outside it certainly appears that there is often a lot more going on in the world of academic research than the simple quest for knowledge.

Of course there are risks in opening up the operation of science in the way I’m suggesting. Cranks will probably proliferate, but we’ll no doubt get used to them – I’m a cosmologist and I’m pretty much used to them already! Some good work may find it a bit harder to be recognized. Lack of peer review may mean more erroneous results see the light of day. Empire-builders won’t like it much either, as a truly open system of publication will be a great leveller of reputations. But in the final analysis, the risk of sticking to our arcane practices is far higher. Public distrust will grow, and centuries of progress may be swept aside on a wave of irrationality. If the price for avoiding that is to change our attitude to who owns our data, then it’s a price well worth paying.



Critical Theory

Posted in Art, Music, Science Politics on August 18, 2009 by telescoper

Critics say the strangest things.

How about this, from James William Davison, music critic of The Times from 1846:

He has certainly written a few good songs, but what then? Has not every composer that ever composed written a few good songs? And out of the thousand and one with which he deluged the musical world, it would, indeed, be hard if some half-dozen were not tolerable. And when that is said, all is said that can justly be said of Schubert.

Or this, by Louis Spohr, written in 1860 about Beethoven’s Ninth (“Choral”) Symphony:

The fourth movement is, in my opinion, so monstrous and tasteless and, in its grasp of Schiller’s Ode, so trivial that I cannot understand how a genius like Beethoven could have written it.

No less an authority than Grove’s Dictionary of Music and Musicians (Fifth Edition) had this to say about Rachmaninov:

Technically he was highly gifted, but also severely limited. His music is well constructed and effective, but monotonous in texture, which consists in essence mainly of artificial and gushing tunes… The enormous popular success some few of Rachmaninov’s works had in his lifetime is not likely to last, and musicians never regarded it with much favour.

And finally, Lawrence Gilman wrote this in the New York Tribune of February 13, 1924, concerning George Gershwin’s Rhapsody in Blue:

How trite and feeble and conventional the tunes are; how sentimental and vapid the harmonic treatment, under its disguise of fussy and futile counterpoint! Weep over the lifelessness of the melody and harmony, so derivative, so stale, so inexpressive.

I think I’ve made my point. We all make errors of judgement and music critics are certainly no exception. The same no doubt goes for literary and art critics too. In fact, I’m sure it would be quite easy to dig up laughably inappropriate comments made by reviewers across the entire spectrum of artistic endeavour. Who’s to say these comments are wrong anyway? They’re just opinions. I can’t understand anyone who thinks so little of Schubert, but then an awful lot of people like to listen to what sounds to me to be complete dross. There even appear to be some people who disagree with the opinions I expressed yesterday!

What puzzles me most about the critics is not that they make “mistakes” like these – they’re only human after all – but why they exist in the first place. It seems extraordinary to me that there is a class of people who don’t do anything creative themselves  but devote their working lives to criticising what is done by others. Who should care what they think? Everyone is entitled to an opinion, of course, but what is it about a critic that implies we should listen to their opinion more than anyone else?

(Actually, to be precise, Louis Spohr was also a composer but I defy you to recall any of his works…)

Part of the idea is that by reading the notices produced by a critic the paying public can decide whether to go to the performance, read the book or listen to the record. However, the correlation between what is critically acclaimed and what is actually good (or even popular) is tenuous at best. It seems to me that, especially nowadays with so much opinion available on the internet, word of mouth (or web) is a much better guide than what some geezer writes in The Times. Indeed, the Opera reviews published in the papers are so frustratingly contrary to my own opinion that I don’t bother to read them until after the performance, perhaps even after I’ve written my own little review on here. Not that I would mind being a newspaper critic myself. The chance not only to get into the Opera for free but also to get paid for spouting on about it afterwards sounds like a cushy number to me. Not that I’m likely to be asked.

In science, we don’t have legions of professional critics, but reviews of various kinds are nevertheless essential to the way science moves forward. Applications for funding are usually reviewed by others working in the field and only those graded at the very highest level are awarded money. The powers-that-be are increasingly trying to impose political criteria on this process, but it remains a fact that peer review is the crucial part of it. It’s not just the input side that is assessed, either. Papers submitted to learned journals are reviewed by (usually anonymous) referees, who often require substantial changes to be made before the work can be accepted for publication.

We have no choice but to react to these critics if we want to function as scientists. Indeed, we probably pay much more attention to them than artists do to the critics in their particular fields. That’s not to say that these referees don’t make mistakes either. I’ve certainly made bad decisions myself in that role, although they were all made in good faith. I’ve also received comments that I thought were unfair or unjustifiable, but at least I knew they were coming from someone who was a working scientist.

I suspect that the use of peer review in assessing grant applications will remain in place for some considerable time. I can’t think of an alternative, anyway. I’d much rather have a rich patron so I didn’t have to bother writing proposals all the time, but that’s not the way it works in either art or science these days.

However, it does seem to me that the role of referees in the publication process is bound to become redundant in the very near future. Technology now makes it easy to place electronic publications on an archive where they can be accessed freely. Good papers will attract attention anyway, just as they would if they were in refereed journals. Errors will be found. Results will be debated. Papers will be revised. The quality mark of a journal’s endorsement is no longer needed if the scientific community can form its own judgement, and neither are the monstrously expensive fees charged to institutes for journal subscriptions.
