The Transparent Dishonesty of the Research Excellence Framework

Some of my colleagues in the School of Physics & Astronomy recently attended a briefing session about the forthcoming Research Excellence Framework. This, together with the post I reblogged earlier this morning, suggested that I should re-hash an article I wrote some time ago about the arithmetic of the REF, and how it will clearly not do what it says on the tin.

The first thing is the scale of the task facing members of the panel undertaking the assessment. Every research-active member of staff in every University in the UK is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The Physics panel comprises 20 members.

As a rough guess I’d say that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There are 20 members of the panel, so that means that between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member of the panel will have to read 640 research papers. That’s an average of nearly two a day. Every day. Weekends included.
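For what it’s worth, the arithmetic is simple enough to check. Here it is as a quick sketch; the department and staff counts are the rough guesses above, not official figures:

```python
# Rough check of the reading load implied by the numbers above.
departments = 40        # rough guess at the number of UK Physics departments
staff_per_dept = 40     # rough average number of research-active staff
outputs_per_person = 4  # "outputs" requested from each person
readers_per_paper = 2   # each output read by at least two panel members
panel_size = 20         # members of the Physics panel

papers = departments * staff_per_dept * outputs_per_person  # 6400
readings = papers * readers_per_paper                       # 12800
per_member = readings // panel_size                         # 640

# Roughly 380 days between the submission deadline (29 November 2013)
# and the announcement of the results in December 2014.
print(per_member / 380)  # just under two papers per member per day
```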

Now we are told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4*  World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U   Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. In other words “Internationally recognized” research will probably be deemed completely worthless by HEFCE. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read in order to place them into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not.

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I haven’t any confidence that it will add much value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs” are published, including a recent pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing for papers published only there.

The involvement of  a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used;  we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out from the REF establishment. Who knows what they actually do behind closed doors?  All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become entirely self-serving. It is imposing increasingly ridiculous administrative burdens on researchers, inventing increasingly arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.

29 Responses to “The Transparent Dishonesty of the Research Excellence Framework”

  1. Rob Ivison Says:

    What is it that leads us to publish in these expensive and exasperatingly slow journals, when any number of open-access options are available? I’m not convinced it is the REF. More likely it’s a combination of snobbery, resistance to change, plus simple things like the widespread availability of bibtex and style files, and a reluctance to ditch years spent learning where to place hyphens to appease MNRAS typesetters (bar Smail, who still prefers the random approach).

    As an STFC employee, I am now unable to hold an STFC postdoc grant in my own right. I’m therefore hopeful that they’ll introduce a new category for me to aspire to: “world-leading, with hand tied behind back”. Thankfully, I’m not obliged to sell my soul to the REF, though my points are available for ten bob and a pint of heavy, should anyone want them.

    Keep fighting the good fight, Peter, regarding both REF and open access.

    • MNRAS-are the only-journal willing to put-up with some-of-your more-flowery prose…

      • Rob Ivison Says:

        Forsooth! Yet those bounders censored my most recent acknowledgements, which were entirely factual, if slightly off-colour 🙂

    • “What is it that leads us to publish in these expensive and exasperatingly slow journals, when any number of open-access options are available?”

      For those without permanent jobs, the fear that only “real” journals will count in assessment. This is the way it is, and this is OK as far as it goes—there are any number of crackpot journals one can publish in. OK, if one knows the person, works in the field etc, then one can read all the papers and form an opinion. But if 100 people apply for a job and each has, say, 30 publications, no-one will read 3000 papers. Some sort of filter is necessary.

      For those with permanent jobs, be my guest. 😐

      As I have said before, this is a separate issue from that of overpriced journals. The RAS can have a web page of “approved” papers, with links to the arXiv. That’s it. I’ll even volunteer to host it. Costs are negligible. But this would mean severing ties with MNRAS, at least in its present form. Of course, its present form is relatively new. The RAS should break ties with commercial publishers and use the MNRAS name for the web page of links to approved arXiv papers.

      A fellow in good standing needs to make such a proposal at an RAS meeting. I’m waiting. Many readers of this blog are RAS fellows (I’m not, and I’m not a UK citizen and I no longer work in the UK). It seems to me that professional societies are the means to this end.

  2. This seems such a farce that some sort of public strike on the part of those involved would be in order. If the “quality” newspapers have a headline like “Nobel-Prize winner says REF is a farce” (or even “leading boffins say REF is a farce”) then perhaps something will happen. If all play along, what will change?

    For a start, when will a member of the RAS suggest that the RAS discontinue its relation with MNRAS and start up a web-based journal, which will consist of nothing more than a web page with links to arXiv papers which have been deemed to meet the standards of the RAS (i.e. the same standard of refereeing as is the case with MNRAS)?

    Whatever one thinks of refereeing, it will be much easier to move away from overpriced journals if refereeing stays intact. Also, many people think that refereeing provides many advantages, including the fact that most of the astronomy stuff on arXiv is OK because most is eventually at least submitted to a refereed journal.
    By all means continue to debate the pros and cons of refereeing, but separate it from the issue of overpriced journals.

    • Anton Garrett Says:

      Agreed with all you say here – strongly – and a very good suggestion too. May it come about soon.

      • If Anton strongly agrees with me, there’s no stopping us now!

      • Of course, anyone who suggests essentially doing away with MNRAS will not make any friends, and might alienate some old ones, at MNRAS or Wiley-Blackwell. A bit dangerous if one needs to publish there. I don’t see any need to do away with MNRAS; one should keep the “brand” but just change the publishing.

    • Dave Carter Says:

      You mean a boycott as in this:

      And letters from Nobel prize winners like these:

      Just browse the blog comments to get some flavour of what reaction from outside your core audience would be.

      As far as MNRAS goes, it would maybe be good if at the end of its current publishing contract it moved to the same publisher as ApJ and AJ use. Then at least the profits would go back into science.

      And as far as arXiv goes, the panel chair and the HEFCE REF manager were fairly clear at a briefing I attended (and other readers here will also have attended one) that papers on arXiv were eligible and would be treated the same way as papers in printed journals. The only anomaly, since cleared up in the revised working methods, was whether papers which were on arXiv before 31/12/2008 but not published in their final journal form until after 01/01/2009 were eligible.

      And as far as journals are concerned, I wouldn’t think that publishing in New Astronomy, on the grounds that it’s the only way of ensuring the details are correct in Scopus, was a very good idea.

      • Blog comments are probably a worse indicator of quality than SCOPUS citation rates. 😐 Nevertheless, the comments on the first article seem to be mainly bickering between unions, with the article just a starting point. The comments on the second article seem to be a rather balanced discussion, as blog comments go.

        With regard to arXiv, isn’t the point whether papers there would be eligible or not moot if SCOPUS doesn’t include them?

      • Dave Carter Says:

        No, because Scopus is only used as a source of citation data, and even then only as a secondary indicator. The main indicator is the judgement of the hard-working panel, based upon reading the papers.

      • All 640 of them. 😐

      • Dave Carter Says:

        They have a year; that’s around 200 working days. Are you saying you couldn’t read 3 papers per day? I would think that their universities would give them the year off teaching.

      • Read 3 papers a day? Yes, if not burdened with teaching and administration. Read 3 papers in 8 hours, understand them, rate them in relation to each other? A tall order, since I assume that most people involved do have other things to do.

        Consider the typical time that it takes to referee a paper, and that in that case one is not concerned with directly comparing it to other papers.

  3. […] “Some of my colleagues in the School of Physics & Astronomy recently attended a briefing session about the forthcoming Research Excellence Framework …” (more) […]

  4. Nick Cross Says:

    There is a paper on arXiv about declining Impact Factors for journals since the 1990s:

    • Anton Garrett Says:

      Refereeing is not the only decent function of journals that needs to be replicated in a post-parasitical system. There is also the fact that researchers are frequently asked to make changes in the logic flow of their write-up in order to make it comprehensible. These changes might be small or large to execute, but they make a huge difference to the reader. And I don’t just mean the standard of prose.

      • I think of this as part of the refereeing process, though of course some suggestions might come from the referee and some from the editor.

  5. Bryn Jones Says:

    When discussing the value or otherwise of the Research Excellence Framework we have to ask how government research funding could be allocated to universities without something like the REF. It might be nice to see it scrapped, but can we think of any better method of distributing funding on the basis of research quality?

    I’m not convinced that allocating the funding according to grant income from other sources, or according to numbers of PhD students, would work because of the differences in funding from one academic discipline to another.

    • Monica Grady Says:

      Take a census of all academics. Make the radical, and totally unjustified, assumption that the higher up the greasy pole you are, the better your research. Assign 1 to a junior lecturer, 2 to a senior lecturer, 3 to a prof. Add up all the points. Take the total pot of money, divide by number of points, and allocate according to the weighting, so a prof gets three times as much as a junior lecturer. Think of all the time saved in not having to read, write or review grant proposals…
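      This scheme is at least trivially cheap to run. A minimal sketch, with invented headcounts and an invented pot; only the 1:2:3 weighting comes from the comment above:

```python
# Seniority-points allocation: census, weights, pro-rata division.
# The headcounts and pot size below are invented for illustration;
# only the 1:2:3 weighting comes from the proposal itself.
weights = {"junior lecturer": 1, "senior lecturer": 2, "professor": 3}
census = {"junior lecturer": 2000, "senior lecturer": 1500, "professor": 1000}
pot = 1_200_000_000  # hypothetical total pot, in pounds

total_points = sum(weights[rank] * n for rank, n in census.items())
per_point = pot / total_points
allocation = {rank: weights[rank] * per_point for rank in weights}
# Each professor receives exactly three times each junior lecturer's share.
```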

      • Think of the incentives to promote people! (Better still, you could do it the Oxford way, where you can be called ‘Professor’ without actually being paid a prof’s salary…)

      • Anton Garrett Says:

        Oxford? Just go to the USA, where most academics are ‘professors’ of some rank, and come back here on sabbatical.

      • Ken Rice Says:

        To me, a solution is to use metrics. The main issue with metrics is that it’s difficult to calibrate (or normalise) them. Using metrics to assess individual researchers is a poor way to determine quality. Using metrics to compare two universities that both cover a broad range of research areas may, on the other hand, give quite a good indication of their relative qualities.

        I would also argue that we should not attempt to determine which university is best, which is second, etc., but rather have broad bands (top 5, next 5, … for example). In a sense I’m suggesting that REF should become an assessment of the whole university, rather than of individual departments.

        One could argue that universities would then try to expand those areas that are typically highly cited and close down those that are not. However, there is a finite amount of funding coming through other sources (the research councils, for example), so there is a limit to how big the highly cited areas can get before it’s no longer financially viable. As long as there are multiple funding sources (REF, Research Council grants, Wellcome Trust, Leverhulme, ERC…) it should be possible to set the different levels of funding in such a way that no single funding source determines how universities behave.

    • Bryn Jones Says:

      Distributing funding according to the numbers and ranks of academic staff would give as much research funds to teaching-only universities as to research-intensive institutions.

      I’m not sure I’d like to see as much money going to the Newport Pagnell Metropolitan University as to the research-intensive Open University up the road.

  6. John Peacock Says:

    Peter, as a member of the physics panel, perhaps I’m not permitted to criticise the process – but I guess the worst that could happen is that I get sacked and don’t have to read my 640 papers…

    I think REF is a missed opportunity to do something sensible at minimal overhead. As Ken Rice points out, use of citation data to assess quality always founders on the difficulty of cross-calibrating different fields. But we *had* the necessary calibration in the form of the RAE2008 results. These were flawed for the reasons REF will be flawed, but it’s probably the best that can be done. It should have been possible to design an algorithm that eats 2008 metrics data and attempts to replicate RAE2008; once the algorithm is optimised, you feed it the 2014 metrics, and you’re done.

    Apart from being faster, such an approach could actually be more accurate than REF, because it can include everything we know about researchers – which is a lot more than 4 papers. Quite a few people may well produce four 4* papers, but some rare individuals could probably produce dozens. The REF algorithm has no way of rewarding these top levels of productivity. So clearly we should look at more papers – but direct reading just isn’t possible in the time available. So in summary we will do a much poorer job at discriminating different levels of research capability than we ought to, and it will take us far longer than doing a better job. How did we let this happen?
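    The calibration idea sketched here is essentially a regression problem. A toy illustration with fabricated numbers (not the real RAE2008 data, and an ordinary least-squares fit standing in for whatever the optimised algorithm would actually be):

```python
# Toy version of the calibration idea: fit a mapping from citation
# metrics to RAE2008 outcomes, then apply it to 2014 metrics.
# All data here are fabricated; this only shows the shape of the idea.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2008 inputs: one row per department,
# columns = (normalised citation rate, papers per FTE).
metrics_2008 = rng.uniform(0, 10, size=(30, 2))
# Hypothetical quality scores the RAE2008 panels actually awarded.
scores_2008 = metrics_2008 @ np.array([0.3, 0.1]) + rng.normal(0, 0.2, 30)

# "Optimise the algorithm": here, a plain least-squares fit.
coeffs, *_ = np.linalg.lstsq(metrics_2008, scores_2008, rcond=None)

# Feed the fitted algorithm the 2014 metrics; no panel reading required.
metrics_2014 = rng.uniform(0, 10, size=(30, 2))
predicted_2014 = metrics_2014 @ coeffs
```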

  7. Anton Garrett Says:

    Too bad that universities don’t play each other at physics as they do in sports. Then it would be a lot easier to run a fantasy league…

  8. […] or citations to make their assessment. It has, however, already been pointed out that this claim is unlikely to be credible. In Physics, there will probably be something like 6500 papers each of which will supposedly be […]

  9. […] in the United States despite having an entirely different structure.  The REF in the UK, to quote one critic, has imposed “increasingly  ridiculous administrative burdens on researchers, inventing […]

