I’ve already posted about the absurdity of scientific papers with ridiculously long author lists, but this issue has recently come alive again with the revelation that the compilers of the Times Higher Education World University Rankings have decided to exclude such papers entirely from their analysis of citation statistics.
Large collaborations – involving not only scientists but also engineers, instrument builders, computer programmers and data analysts – are the norm in some fields of science, especially (but not exclusively) experimental particle physics, so the arbitrary decision to omit such works from bibliometric analysis is not only idiotic but also potentially damaging to a number of disciplines. The “logic” behind this decision is that papers with “freakish” author lists might distort analyses of citation impact, even allowing – heaven forbid – small institutions with a strong involvement in world-leading studies, such as those associated with the Large Hadron Collider, to do well compared with larger institutions that are not involved in such collaborations. If what you do doesn’t fit comfortably within a narrow and simplistic method of evaluating research, then it must be excluded, even if it is the best in the world. A sensible person would realise that if the method doesn’t give proper credit then you need a better method, but the bean counters at the Times Higher have decided to give no credit at all to research conducted in this way. The consequences of putting the bibliometric cart in front of the scientific horse could be disastrous, as institutions find their involvement in international collaborations dragging them down the league tables.

I despair of the obsession with league tables, because these rankings involve trying to shoehorn a huge amount of complicated information into a single figure of merit. This is not only pointless, but could also drive behaviours that are destructive to entire disciplines.
That said, there is no denying that particle physics, cosmology and other disciplines that operate through large teams must share part of the blame. Those involved in these collaborations have achieved brilliant successes through the imagination and resourcefulness of the people involved. Where imagination has failed, however, is in the continued insistence that the only way to give credit to members of a consortium is by making them all authors of scientific papers. In the example I blogged about a few months ago, this blinkered approach generated a paper with more than 5000 authors; of the 33 pages in the article, no fewer than 24 were taken up with the list of authors.
Papers just don’t have five thousand “authors”. I suspect that only about 1% of these “authors” have even read the paper. That doesn’t mean that the other 99% didn’t do immensely valuable work. It does mean that pretending they participated in writing the article that describes their work isn’t the right way to acknowledge their contribution. How are young scientists supposed to carve out a reputation if their name is always buried in immensely long author lists? The very system that attempts to give them credit renders that credit worthless. Unable to learn anything from publication lists, appointment panels have to rely on reference letters instead, and that means early-career researchers have to rely on the power of patronage.
As science evolves, it is extremely important that the methods for disseminating scientific results evolve too. The trouble is that they aren’t. We remain obsessed with archaic modes of publication, partly because of innate conservatism and partly because the lucrative publishing industry benefits from the status quo. The system is clearly broken, but the scientific community carries on regardless. When there are so many brilliant minds engaged in this sort of research, why are so few willing to challenge an orthodoxy that has long outlived its usefulness? Change is needed, not to make life simpler for the compilers of league tables, but for the sake of science itself.
I’m not sure what is to be done, but it’s an urgent problem which looks set to develop very rapidly into an emergency. One idea appears in a paper on the arXiv with the abstract:
Science and engineering research increasingly relies on activities that facilitate research but are not currently rewarded or recognized, such as: data sharing; developing common data resources, software and methodologies; and annotating data and publications. To promote and advance these activities, we must develop mechanisms for assigning credit, facilitate the appropriate attribution of research outcomes, devise incentives for activities that facilitate research, and allocate funds to maximize return on investment. In this article, we focus on addressing the issue of assigning credit for both direct and indirect contributions, specifically by using JSON-LD to implement a prototype transitive credit system.
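The “transitive credit” idea in that abstract can be sketched in a few lines of Python. Everything below – the product names, the contributors and the fractional weights – is invented purely for illustration; the paper itself proposes encoding such credit maps in JSON-LD rather than executable code. The core notion is that a paper can assign a fraction of its credit to another research product (say, a software package), and that fraction then flows through to the contributors of that product, and so on down the chain.

```python
# A minimal sketch of transitive credit, assuming each research product
# carries a simple map of contributors to fractional credit weights.
# All names and numbers here are hypothetical.

def transitive_credit(product, credit_maps):
    """Flatten a product's credit map so that any credit assigned to
    another product is redistributed to that product's own contributors."""
    totals = {}
    for contributor, weight in credit_maps[product].items():
        if contributor in credit_maps:
            # Contributor is itself a product: recurse and scale its shares.
            for person, share in transitive_credit(contributor, credit_maps).items():
                totals[person] = totals.get(person, 0.0) + weight * share
        else:
            # Contributor is a person: credit stops here.
            totals[contributor] = totals.get(contributor, 0.0) + weight
    return totals

# Hypothetical example: a paper credits two authors and a software toolkit,
# whose own credit is split between a developer and an underlying library.
credit_maps = {
    "paper":   {"alice": 0.5, "bob": 0.3, "toolkit": 0.2},
    "toolkit": {"carol": 0.75, "libfit": 0.25},
    "libfit":  {"dave": 1.0},
}

print(transitive_credit("paper", credit_maps))
# alice gets 0.5, bob 0.3, carol 0.2 * 0.75 = 0.15, dave 0.2 * 0.25 = 0.05
```

The point of the exercise is that dave, who never appears on the paper’s author list at all, still ends up with a well-defined, quantified share of the credit – which is exactly the kind of acknowledgement a 24-page author list fails to provide.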
I strongly recommend this piece. I don’t think it offers a complete solution, but it certainly contains many interesting ideas. For the situation to improve, however, we have to accept that there is a problem. As things stand, far too many senior scientists are in denial. This has to change.