Archive for preprints

Thirty Years of Preprints

Posted in Open Access on February 21, 2021 by telescoper

I thought I’d share an interesting paper (by Xie, Shen & Wang) that I found on the arXiv with the title Is preprint the future of science? A thirty year journey of online preprint services. The abstract reads:

Preprint is a version of a scientific paper that is publicly distributed preceding formal peer review. Since the launch of arXiv in 1991, preprints have been increasingly distributed over the Internet as opposed to paper copies. It allows open online access to disseminate the original research within a few days, often at a very low operating cost. This work overviews how preprint has been evolving and impacting the research community over the past thirty years alongside the growth of the Web. In this work, we first report that the number of preprints has exponentially increased 63 times in 30 years, although it only accounts for 4% of research articles. Second, we quantify the benefits that preprints bring to authors: preprints reach an audience 14 months earlier on average and associate with five times more citations compared with a non-preprint counterpart. Last, to address the quality concern of preprints, we discover that 41% of preprints are ultimately published at a peer-reviewed destination, and the published venues are as influential as papers without a preprint version. Additionally, we discuss the unprecedented role of preprints in communicating the latest research data during recent public health emergencies. In conclusion, we provide quantitative evidence to unveil the positive impact of preprints on individual researchers and the community. Preprints make scholarly communication more efficient by disseminating scientific discoveries more rapidly and widely with the aid of Web technologies. The measurements we present in this study can help researchers and policymakers make informed decisions about how to effectively use and responsibly embrace a preprint culture.

The paper makes a number of good arguments, backed up with evidence, as to why preprints are a good idea. I recommend reading it.
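As a quick back-of-the-envelope check (my own arithmetic, not a figure quoted in the paper), the headline 63-fold increase over 30 years corresponds to a compound annual growth factor of about 1.15, i.e. roughly 15% per year, or a doubling time of about five years:

$$g = 63^{1/30} = \exp\!\left(\frac{\ln 63}{30}\right) \approx 1.15, \qquad t_{\rm double} = \frac{\ln 2}{\ln g} \approx 5~\mbox{years}.$$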

Here is Figure 1 from the paper (parts of the chart are difficult to read, so see the paper for details):

This shows that about 50% of all preprints are in the areas of physics and mathematics, and that they are distributed predominantly through the arXiv. Other scientific disciplines, such as biology, have a much lower prevalence of preprints. I’ve been putting my papers on the arXiv since the early Nineties, i.e. for most of the period covered by the paper. I don’t know why other fields are so backward.

It’s standard practice in my own field of astrophysics to put preprints of articles on the arXiv, but younger readers will probably not realize that preprints were not always produced in the electronic form they are today. We all used to make large numbers of paper copies and post them at great expense to (potentially) interested colleagues before publication in order to get comments. That was extremely useful, because a paper could take over a year to be published after being refereed for a journal: that’s too long a timescale when a PhD or PDRA position is only a few years in duration. The first papers I was given to read as a new graduate student in 1985 were all preprints that were not published until well into the following year. In some cases I had more or less figured out what they were about by the time they appeared in a journal!

The practice of circulating preprints persisted well into the 1990s. Usually these were produced by institutions with a distinctive design, logo, etc., which gave them a professional look and made it easier to distinguish ‘serious’ papers from crank material (which was also in circulation). It also suggested that some internal refereeing had taken place inside the institution before an “official” preprint was produced, lending it an air of trustworthiness. Smaller institutions couldn’t afford all this, so were somewhat excluded from the preprint business.

With the arrival of the arXiv the practice of circulating hard copies of preprints in astrophysics gradually died out, to be replaced by ever-increasing numbers of electronic articles. The arXiv does have some gatekeeping – in the sense that there are some controls on who can deposit a preprint there – but it is definitely far easier to circulate a preprint now than it was.

It is still the case that big institutions and collaborations insist on quite strict internal refereeing before publishing a preprint – and some even insist on waiting for a paper to be accepted by a journal before adding it to the arXiv – but there’s no denying that among the wheat there is quite a lot of chaff, some of which attracts media coverage that it does not deserve. It must be admitted, however, that the same can be said of some papers that have passed peer review and appeared in high-profile journals! No system that is operated by human beings will ever be flawless, and peer review is no different.

Nowadays, in astrophysics, the single most important point of access to the scientific literature is the arXiv, which is why the Open Journal of Astrophysics was set up as an overlay journal providing rigorous peer review for preprints, not only to give them a sort of quality mark but also to improve them through the editorial process.

So is the preprint the future of science? I think that depends on how far ahead you are willing to look. In my opinion we are currently in an era of transition, trying to shoehorn old publishing practices into a digital world. At some point in the future people will realize that the scientific paper itself – whether a preprint or not – is an outmoded 18th-century concept, and that there are far more effective ways of disseminating scientific ideas and information at our fingertips, if only we would stop living in the past.

Arguing the Case for Preprints

Posted in Open Access on September 23, 2020 by telescoper

This is Peer Review Week 2020, as part of which I am participating tomorrow afternoon (Irish time) in a live panel discussion/webinar called Increasing transparency and trust in preprints: Steps journals can take.

Working in a field like astrophysics, where the use of preprints as a means of disseminating information and ideas is well established, I’m always surprised that some people working in other disciplines don’t really approve of them at all. See, for example, this Twitter thread. Still, even in the biosciences, preprints have their advocates and there are signs that attitudes may be changing.

That is not to say that things aren’t changing in astrophysics too. One of the interesting astronomical curiosities I’ve acquired over the years is a preprint of the classic 1957 work of Burbidge, Burbidge, Fowler and Hoyle (a paper usually referred to as B2FH after the initials of its authors). It’s such an important contribution, in fact, that it has its own Wikipedia page.

Younger readers will probably not realize that preprints were not always produced in the electronic form they are today. We all used to make large numbers of paper copies and post them at great expense to (potentially) interested colleagues before publication in order to get comments. That was extremely useful, because a paper could take over a year to be published after being refereed for a journal: that’s too long a timescale when a PhD or PDRA position is only a few years in duration. The first papers I was given to read as a new graduate student in 1985 were all preprints that were not published until well into the following year. In some cases I had more or less figured out what they were about by the time they appeared in a journal!

The B2FH paper was published in 1957, but the practice of circulating preprints persisted well into the 1990s. Usually these were produced by institutions with a distinctive design, logo, etc., which gave them a professional look and made it easier to distinguish ‘serious’ papers from crank material (which was also in circulation). It also suggested that some internal refereeing had taken place inside the institution before an “official” preprint was produced, lending it an air of trustworthiness. Smaller institutions couldn’t afford all this, so were somewhat excluded from the preprint business.

With the arrival of the arXiv the practice of circulating hard copies of preprints in astrophysics gradually died out, to be replaced by ever-increasing numbers of electronic articles. The arXiv does have some gatekeeping – in the sense that there are some controls on who can deposit a preprint there – but it is far easier to circulate a preprint now than it was.

It is still the case that big institutions and collaborations insist on quite strict internal refereeing before publishing a preprint – and some even insist on waiting for a paper to be accepted by a journal before adding it to the arXiv – but there’s no denying that among the wheat there is quite a lot of chaff, some of which attracts media coverage that it does not deserve. It must be admitted, however, that the same can be said of some papers that have passed peer review and appeared in high-profile journals! No system that is operated by human beings will ever be flawless, and peer review is no different.

Nowadays, in astrophysics, the single most important point of access to the scientific literature is the arXiv, which is why the Open Journal of Astrophysics was set up as an overlay journal providing rigorous peer review for preprints, not only to give them a sort of quality mark but also to improve them through the editorial process.

As for increasing transparency and trust in preprints, I think I’ll save some suggestions for tomorrow’s webinar. A good start, however, would be for journals to admit their own limitations and start helping rather than hindering the dissemination of information and ideas.