Counting for the REF

It’s a lovely day in Brighton and I’m once again on campus for an Admissions Event at Sussex University, this time for the Mathematics Department in the School of Mathematical and Physical Sciences. After all the terrible weather we’ve had since I arrived in February, it’s a delight and a relief to see the campus at its best for today’s crowds. Anyway, now that I’ve finished my talk and the subsequent chats with prospective students and their guests, I thought I’d do a quick blogette before heading back home and preparing for this evening’s Physics & Astronomy Ball. It’s all go around here.

What I want to do first of all is to draw attention to a very nice blog post by a certain Professor Moriarty who, in case you did not realise it, dragged himself away from his hiding place beneath the Reichenbach Falls and started a new life as Professor of Physics at Nottingham University.  Phil Moriarty’s piece basically argues that the only way to really judge the quality of a scientific publication is not by looking at where it is published, but by peer review (i.e. by getting knowledgeable people to read it). This isn’t a controversial point of view, but it does run counter to the current mania for dubious bibliometric indicators, such as journal impact factors and citation counts.

The forthcoming Research Excellence Framework involves an assessment of the research that has been carried out in UK universities over the past five years or so, and a major part of the REF will be the assessment of up to four “outputs” submitted by research-active members of staff over the relevant period (from 2008 to 2013). Reading Phil’s piece might persuade you to be happy that the assessment of the research outputs involved in the REF will be primarily based on peer review. If you are, then I suggest you read on because, as I have blogged about before, although peer review is fine in principle, the way it will be implemented as part of the REF has me deeply worried.

The first problem arises from the scale of the task facing members of the panel undertaking this assessment. Each research-active member of staff is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The panel comprises 20 members.

As a rough guess, let’s assume that the UK has about 40 physics departments, and that the average number of research-active staff in each is about 40. That gives about 1600 individuals for the REF. In fact, the number of Category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12,800 paper-readings. There is some uncertainty in these figures because (a) there is plenty of evidence that departments are going to be more selective in who is entered than was the case in 2008 and (b) some departments have increased their staff numbers significantly since 2008. These two factors work in opposite directions so, not knowing the size of either, it seems sensible to go with the numbers from the previous round for the purposes of my argument.

There are 20 members of the panel, so 6400 papers submitted means that, between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014, each member of the panel will have to read 640 research papers. That’s an average of about two a day…
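For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch. The department and staff figures are the rough estimates above, not official numbers:

```python
# Rough REF workload estimate, using the figures quoted above.
departments = 40          # approximate number of UK physics departments
staff_per_dept = 40       # rough average of research-active staff each
outputs_per_person = 4    # up to four REF "outputs" per person
readings_per_output = 2   # each output read by at least two panellists
panel_size = 20

staff = departments * staff_per_dept            # about 1600 individuals
outputs = staff * outputs_per_person            # 6400 papers
readings = outputs * readings_per_output        # 12800 paper-readings
per_panellist = readings // panel_size          # 640 readings each

# Spread over the year or so between the submission deadline and the
# announcement of the results, that is roughly two papers per day.
print(staff, outputs, readings, per_panellist)  # 1600 6400 12800 640
```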

It is therefore blindingly obvious that whatever the panel does do will not be a thorough peer review of each paper, equivalent to refereeing it for publication in a journal. The panel members simply won’t have the time to do what the REF administrators claim they will do. We will be lucky if they manage a quick skim of each paper before moving on. In other words, it’s a sham.

Now we are also told that the panel will use their expert judgment to decide into which of the following categories each output falls:

  • 4* World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U  Unclassified

There is an expectation that the so-called QR funding allocated as a result of the REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. The word on the street is that the weighting for 4* will be 9 and that for 3* only 1. “Internationally recognized” will be regarded as worthless in the view of HEFCE. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read to place them into one of the above categories, especially the crucial ones, “world-leading” and “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not. The steep increase in weighting between 3* and 4* means that this judgment could cause a drop in funding severe enough to spell closure for a department.

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I have no confidence that they will add any value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs” are published, including a pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so in many cases they will inevitably be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing for papers published only there.

The involvement of a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally, we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used; we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out of the REF establishment. Who knows what they actually do behind closed doors? All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do: it has become entirely self-serving. It is imposing increasingly ridiculous administrative burdens on researchers, inventing increasingly arbitrary assessment criteria, and wasting increasing amounts of money on red tape which should actually be going to fund research.

And that’s all just about “outputs”. I haven’t even started on “impact”….


53 Responses to “Counting for the REF”

  1. Hi,

    Disclaimer: I’m not involved in the REF in any capacity. I’m not even submitting to it. I work at a UK research institute that is exempt from the REF – we have an independent review process.

    I don’t think the situation is anywhere near as bad as you make out, in your post.

    It’s pretty self-evident that the REF is *not* the only reason for using ‘traditional journals’ – you only have to see the many complaints on the internet from junior faculty and postdocs about the kudos still attached to ‘big-name’ journals for getting promotions, tenure, or new jobs, to realise that most of the problem is internal to academia. We experience the lingering kudos of Nature/Science/Cell publications at our institute and don’t take part in the REF. The REF process at least aims to consider the value of manuscripts at an individual manuscript’s citation level, adjusted for the field (where appropriate), and not only the impact factor of the journal they were printed in – not so for many academic interviewers or promotion committees! A manuscript in PLoS One with the same level of impact/citations as one in Science should be scored just as highly under REF, as I believe you will see if you read the assessment criteria.

    As for Elsevier’s involvement in citation counts, there are essentially only three viable options for obtaining this data reliably, and they don’t agree with each other on counts. There’s Elsevier/SCOPUS, Thomson-Reuters/SCI, and Google Scholar. Sadly, Google Scholar appear to have been technically unable to support the REF requirements for bulk access to data, due to publisher agreements – this doesn’t leave much choice, at the moment.

    The assessment criteria are clear, and freely and publicly available. You may like to read the panel B criteria more closely – from your blog topics I suppose you’ll be dealt with by subpanel 9 (if you’re submitted). Relevant paragraphs in that document include:

    32, which mentions outputs that the main panel welcome, and these include “published papers in peer-reviewed journals”. ArXiv is not a peer-reviewed journal (see, e.g., “nor is it a refereed publication venue.”). I expect you can submit ArXiv material if you want, but why *should* you expect it to be assessed in the same way as a peer-reviewed publication, when it isn’t one?

    54, “Sub-panels 7, 8, 9 and 10 consider that, within their disciplines, normally all the relevant information that the panel requires will be contained in the submitted outputs and the accompanying citation data, where the latter are provided by the REF system […]”

    59, “Sub-panels 7, 8, 9 and 11 acknowledge that citation data are widely used and consider that it is well understood in the disciplines covered in their UOAs. These sub-panels will make use of citation data, where available, as part of the indication of academic significance to inform their assessment of output quality.”

    60, “Where available on the Scopus citation database, the REF team will provide citation counts for research outputs […] sub-panels will also receive discipline-specific contextual information about citation rates […] to inform, if appropriate, the interpretation of citation data.”

    62, “a. Where available and appropriate, citation data will form part of the process of assessment, in relation to the academic significance of outputs. It will be used as one element to inform peer review judgements made about output quality, and will not be used as a primary tool in the assessment.
    b. The absence of citation data for an output will not be taken to mean an absence of academic significance.
    c. Sub-panels will be mindful that for some forms of output (for example relating to applied research) and for recent outputs, citation data may be an unreliable indicator. Sub-panels will take due regard of the potential equalities implications of using citation data.”

    In particular, paragraph 68 provides a clear description of the criteria for each of the significance levels – too long to reproduce here.


    • telescoper Says:

      In my field (which is astrophysics) papers are posted on the arXiv as well as in peer-reviewed journals, and many gain citations on the former before they appear in the latter. These are missed by the services you list. They are picked up by the NASA/ADS system, which the panel isn’t supposed to use (but probably will).

      It is telling that you refer to “impact/citations” rather than “quality” before the bits of REF documentation you copied in. It’s the entire point that these are not synonymous, and that panels will not be able to judge quality because they will be overloaded.

      • Papers in my field (quantitative biology) are also posted to ArXiv as well as to peer reviewed journals. I’m quite familiar with it, so I understand the dynamics, and that it’s not a peer-reviewed publication any more than, say, this blog is. Yes, SCOPUS/SCI miss out ArXiv citations, but that fact is not in debate. You aren’t prevented from submitting ArXiv publications, it’s just that citation counts won’t be used when assessing ‘quality’. And, to the extent that peer review is an indicator of ‘quality’ (and you appear to agree with Moriarty that it is), *that* is also absent, which may be more of an issue…

        I’m not going to defend the idea that citation counts are a proxy for quality. That’s because I don’t believe that they are. If you read the REF extracts (or, better, the whole document), you’ll find that they agree with us both. See para 62 again: “It will be used as one element to inform peer review judgements made about output quality, and will not be used as a primary tool in the assessment.” They acknowledge explicitly that citation data is not always an indicator of quality (para 62c). This point that you make appears already to be well understood by all concerned; there are, and have always been, valid criticisms to be made of the REF exercise (submission eligibility of postdocs/fixed-term staff, and that potential 3*/4* funding cliff for instance – let alone whether it should take place at all), but the inappropriateness of citation/quality equivalence isn’t a good one.


      • telescoper Says:

        I may have misunderstood, but your second paragraph appears to argue against itself.

      • I don’t see how the second paragraph implies that. Maybe bullet points might be clearer:

        1) In your post you criticise the REF on the grounds that, inter alia, citation counts aren’t a reliable proxy for quality
        2) I agree that citations aren’t a reliable proxy for quality
        3) The REF documentation states very clearly that citations aren’t a reliable proxy for quality (which is why they are only one indicator of quality, and not the primary indicator, at that).
        4) Therefore we all (you, me and the REF) agree.
        5) It follows that your criticism of the REF on those grounds is misplaced

        I did read your whole piece, and this one that it links to (which gave me a feeling of deja vu 😉 – and a couple of others). I disagree with you. You say that the REF panel will have a heavy workload, and we agree on that. You seem to assume, however, that the REF panel *should* be reviewing papers as though they are assessing them for publication (“It is therefore blindingly obvious that whatever the panel does do will not be a thorough peer review of each paper, equivalent to refereeing it for publication in a journal. The panel members simply won’t have the time to do what the REF administrators claim they will do.”) I can’t seem to find anywhere that this claim is made by ‘REF administrators’. Also, I don’t agree that they should review papers in that depth: it would take an insane amount of time, and it’s already been done at least once – and more thoroughly – for the peer-reviewed outputs. I don’t think that the REF panel, or their administrators, think that they should review submitted papers in that depth, either (in passing, I suggest that it’s probably better for most academics’ sanity and ego that published work is rarely examined so closely again…). Your premise here is, I think, invalid. I also believe it’s perfectly possible for most academics to get through more than two papers per day and assess their relative quality in a manner appropriate to this kind of assessment. If you’ve ever participated in a review process in that capacity (not necessarily REF, maybe even only an individual’s performance review) you’ll undoubtedly have done the same.

        Anecdotally, my experience with the RAE was that some academic departments formed their own idea of what the process required, or ‘should’ be, and submitted according to that idea, rather than what was written in the documentation. Those departments often undersold their performance needlessly and, although individuals aren’t graded in the process, none of the individuals in a department benefit when that happens. My advice (for grant applications and other things, too…) is always to read the documentation. If the rules aren’t being adhered to you can always appeal after the fact. If you end up following what you *think* the rules are without checking, things can go wrong.


      • telescoper Says:

        “I’m not going to defend the idea that citation counts are a proxy for quality. That’s because I don’t believe that they are. .. there are, and have always been, valid criticisms to be made of the REF exercise ..but the inappropriateness of citation/quality equivalence isn’t a good one.”

      • 1) I really don’t understand what you’re getting at with that quote. What do you think I was saying? In your post you criticise the REF on grounds that they will equate citation count with quality: something they say they won’t do. In fact, something that the REF documentation explicitly agrees with you on (note that an indicator of quality is not a proxy for quality). You, me and the REF docs all agree that citation count doesn’t equal quality. There are problems with the REF, but that isn’t one of them.

        2) If your criticism really does boil down to nothing more than that you expect that the panel’s future behaviour won’t comply with the published guidelines (and I’ve not yet seen any evidence that this happened last time…), then you don’t have a substantive argument, only supposition that veers towards tongue-in-cheek conspiracy theory when you talk about “what panels will do behind closed doors and before they shred all the evidence” (although you might note that compliance with the Data Protection Act can require organisations to shred personal data – we have to do it with the CVs of job applicants and interview notes, post-interview). You’re either saying that the panel will knowingly act in bad faith, or that you’ve seen the future – there’s no discussion to be had either way, so I’m going to leave off, at this point.

        Have a good weekend,


      • telescoper Says:

        1. I quoted it to demonstrate my problem understanding your reasoning. It appears to say that citations can’t be used as a proxy for quality, but that my criticism of citations as a proxy for quality is invalid.

        2. At open briefings ahead of the RAE it was clearly stated by the panel chair for physics that citation data would not be used. Panellists subsequently admitted ignoring this instruction.

        I don’t think the Data Protection Act is relevant as the important documents pertain not to personal data but to how the panel’s deliberations were conducted. Destruction of this evidence was more likely to have been in order to prevent FOI requests or even a judicial review which would have established the difference between published rules and actual practice.

      • telescoper Says:

        And I repeat my point – what’s written in the rules is not the same as what panels will do behind closed doors and before they shred all the evidence. It wasn’t last time and it won’t be this time either.

      • telescoper Says:

        Ps. If you read my piece you’ll see why the panels will in practice have to use citations as a primary tool: they won’t have time to do anything else.

      • Dave Carter Says:

        Most papers on arXiv, and all of mine, are peer-reviewed. You can tell that from the note under the abstract which says something like “Accepted for publication in MNRAS”. It may be that you might want to submit a paper which has been accepted by the end of December 2013, but is scheduled for publication in the January or February 2014 edition. That of course means you will not be able to use it next time. If there is a next time, and if you think you will be around by next time.

        Hopefully by next time the very concept of paper publication will have gone, and publication will be able to follow on very quickly from acceptance.

        It’s also worth quoting in full paragraph 32 of the Panel B working methods, which says:

        “The main panel welcomes all forms of output submitted to its sub-panels, including:

        – books, book chapters and research monographs

        – conference papers and reports

        – new materials devices products and processes

        – patents

        – published papers in peer-reviewed journals

        – software, computer code and algorithms

        – standards documents

        – technical reports, including confidential reports”

        Notwithstanding this, I would assume that the vast majority of outputs submitted in REF2 to the subpanel will be peer-reviewed papers. However, there is a chance that outputs submitted as evidence of underpinning research in REF3b (Impact case studies) may not be. They might for instance be Proc SPIE papers, a grey area as far as peer review is concerned.

        Also, when assessing the quality of papers submitted to REF2, it is important to remember what the criteria are: Rigour, Originality and Significance. Citations may be an indicator of Significance, or they may not be, but Rigour and Originality are key, and it may well be that your most highly cited papers are not the most rigorous or original.

      • telescoper Says:

        “Your manuscript is both good and original. But the part that is good is not original, and the part that is original is not good..”

        (attributed to Samuel Johnson)

    • I must strongly agree with Peter here – I think that you’re being just a little naive in imagining that the REF guidance document is a perfect representation of the actual working practices of the REF panels. The devil is in the (implicit/tacit) detail…

      If the panels will indeed rigidly follow the rules, why is there the requirement to shred the notes from the meetings after the results are announced? This alone suggests that it’s not quite as cut-and-dried as you make out. (It’s also a significant wedge of public money we’re discussing so one could question the ethics of shredding the documentation. But that’s an argument for another day.).

      • Bryn Jones Says:

        Of course, freedom of information requests would have been made to disclose the contents of the 2008 RAE papers had they not been shredded. My view is that the shredding policy exists to obstruct this.

      • Dave Carter Says:

        What I hope that they will not do is destroy the paper grades, leaving only the profiles. Because according to the working methods, once an author has been accepted as having made a significant contribution to a paper, and with fewer than 11 authors that is automatically so, then the paper is graded on its own merits. So if I, as an author in a post-1992 university, submit a paper, as do my co-authors in a Russell Group university, it should receive the same grade in both submissions. That should be auditable and it should be audited.

  2. […] of papers have been voiced repeatedly. Most recently in Philip Moriarty’s post at IOP and at Telescoper’s blog. I agree with both […]

  3. The real objection to REF is its massive inefficiency: in my discipline (psychology) you can get closely similar results for the last RAE by just computing a departmental H-index – I did this for around 40 departments in an afternoon.
    I think we academics are our own worst enemies: we should not have gone along with this system. It is diverting massive amounts of time and money from the academic activities we should be engaged in.
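    For what it’s worth, a departmental H-index of the kind described here is simple to compute from a list of per-paper citation counts. The sketch below illustrates the standard definition; it is my illustration, not the commenter’s actual script:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical department whose papers have these citation counts:
print(h_index([50, 18, 9, 6, 5, 3, 1, 0]))  # 5 (five papers with >= 5 citations)
```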

  4. PS. White text on black background for a blog is a REALLY bad idea.

  5. I wonder if the math stands up: you reduce the number of papers to be read from 6400 to 640. If 6400 is correct this means 20 papers need to be read in one day, every day.

    • telescoper Says:

      20 people on panel, each of 6400 papers read by two of them. Each panel member therefore reads 640 in total.

  6. […] Coles, who writes the Telescoper blog, has written a new post called Counting for the REF. I won’t say much about this post as you can read it for yourself, but I agree with much of […]

  7. peter:

    i agree that the REF seems a waste of resources – how many of us are on internal/external “mock-REF” panels trying to double-guess what the actual panel will do?

    the main driver of this panic is the strong gradient in the funding. as you point out – instead of rewarding 3* and 4* papers with modestly different resource (sensible in view of the significant uncertainty in allocating an output to one grade or the other) – someone has decided on a very steep reward gradient. i can’t believe this was done for self-interest, given that any sensible player will appreciate the scatter in grades which the process will produce.

    i also believe that the panel will be forced to use citation data in an attempt to judge “quality” and “impact” of papers outside of their specific field (hopefully in astronomy they’ll use ADS given its relative completeness compared to some of the other commercial databases).

    anyway – just be glad that neither of us are on the panel.


    (and i like white-on-black)

    • Bryn Jones Says:

      Yes, I agree that the very great difference in funding between grades 4* and 3* causes significant problems, magnifying the effects of noise or of slight biases. It’s a really silly weighting system.

    • John Peacock Says:


      As you know, I’m on the physics panel, but am also being sucked into local mock REF exercises. This is giving me valuable training, I suppose – but it just leaves me depressed at the amount of wasted effort. With the input of a large number of people, Edinburgh will produce a set of * scores that are basically as good as you can get – certainly in a rank sense. All that REF needs to do is to take these lists that we are all producing and interleave them with a modest amount of cross-calibration. The result would be better than the one that the panel will eventually achieve, at a fraction of the effort. But all these carefully-assembled ratings will be thrown away.

      The reason we’re all doing this is because of one of the most moronic aspects of the whole REF: the freedom to try to game the system by optimizing who you declare. I’d really like to know who inflicted this needless pain on us. Just declaring all staff would have reduced by an order of magnitude the work incurred by everyone (apart from the panel members, but as things stand, more integrated time will have been wasted by non-members).

      I like white on black too. Plus it must save energy.

      • Bryn Jones Says:

        If departments did boycott the REF, they would get no REF-based research funding from the higher education funding councils. Those departments would also fall to the bottom of subject research league tables compiled by magazines and newspapers, which in turn feeds into university league tables affecting student recruitment. It’s not a place any academics would want to go unless they already do badly under the REF (such as in teaching-only universities).

        So boycotts are to be avoided at all costs.

  8. telescoper Says:

    Yes indeed, thanks. Now corrected.

  9. Dave Carter Says:

    Well to some extent. If I put a paper on arXiv it is the version accepted by MNRAS, not the version after Wiley-Blackwell have finished with it. They have lost that contract now I understand.

  10. Dave Carter Says:

    Of course in that situation you can submit that paper to the REF, there are less than 11 authors and you are deemed to have made a significant contribution.

  11. Dave Carter Says:

    John might like to comment here, or he may feel constrained not to, but if I were on a panel, which I am not, I would be quite offended at the suggestion that I could not give a critical reading to 640 papers in a year. That’s two a day. If you do three a day you can have weekends off. I am assuming that HEFCE in some way buys out the time of the panel members, so that they have no other teaching or admin responsibilities for 2014.

    • Dave Carter Says:

      It wasn’t really satire, but it was more in hope than expectation. In my view membership of a REF panel is such a responsibility, with far-reaching and long-lasting consequences (much more so than membership of an RCUK grants panel, for instance), that I was hoping the members might have been given the resources to focus fully on this task.

  12. John Peacock Says:


    “I am assuming that HEFCE in some way buys out the time of the panel members”

    Dream on.

    As it turns out, my term as head of astronomy in Edinburgh finishes in Sept 2013. An admin responsibility of that size would be hard (for me) to juggle with REF, but I believe some panel members will be sticking with big admin responsibilities – so I was lucky with the timing. And I don’t know anyone who’s getting out of teaching by being part of REF.

  13. […] Higher education institutions have already responded to a survey inviting them to indicate their intentions to return researchers to the REF, and a summary of the findings can be found here. This suggests that, across Main Panel A (which includes UoA A3 for Allied Health Professions, Dentistry, Nursing and Pharmacy) 2% fewer people will be returned than were returned in RAE 2008. So let’s assume a uniform 2% drop in outputs across all of Main Panel A’s UoAs compared with RAE 2008, which (based on the 12,598 figure above) suggests a total return to UoA A3′s expert reviewers of some 12,346 individual outputs. That’s 12,346 journal articles, book chapters, reports to funding bodies (and so on) to be read and quality graded by a panel of 38 people. Assuming each output is considered by two panel members then each person will have around 650 items to consider, throughout the period from January to December 2014. For a cross-panel comparison, I note that this is a figure remarkably close to the 640 items the blogging physicist Peter Coles estimates will be read and reviewed by mem…. […]

  14. rachelbok Says:

    Reblogged this on repository.

  15. This is a very poorly argued post. From the fact that Elsevier provides data, it does not follow that “The involvement of a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry”. What it demonstrates is that, if a government-driven scheme wishes to acquire data from Elsevier, Elsevier will oblige. Their ‘involvement’ is as supplier, not driver.

    You go on to argue that (1) “It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives” and (2) “It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are”. Simply for the sake of argument, let’s assume these two propositions are true: what have they to do with the REF?

    One further point: open access publishing (and open archives) may offer advantages to academic authors. In general, however, it is not “free of charge”. Free of charge to the reader, yes: free of charge (full stop), no.

    Your overall conclusion (by which I mean your final paragraph) seems to me broadly correct. However, it is logically independent of the propositions identified above.

    • telescoper Says:

      What the two propositions have to do with the REF is that, at least in my field, the REF only considers publications that appear in traditional journals. Without the REF we could just put our papers on the arXiv and cut out the journals entirely.

      Currently academics write the papers and referee them for free for the benefit of publishing houses. We also support online repositories such as the arXiv. All this is funded one way or another by the academic community. What I’m arguing is that we should make access to this free instead of having to pay again to retrieve it.

      • I would be grateful if you could clarify a point here. Does the REF documentation say that open access publications will not be recognised? I don’t recall reading that, but it is a year and a half since I read the documentation (so I’m not up to date on the policy) and I assume from your comment above that this is the case. I would be grateful for the reference.

        I should add that I remain perplexed by the logic of your argument. Are you suggesting that only open access publications should be recognised – and that the REF should discount publications that your colleagues choose to submit to traditional (i.e. pay-wall) journals?

      • Dave Carter Says:

        I posted above a list of the types of output which are recognised by REF panel B, according to the REF documentation. There are eight types, of which published papers in peer-reviewed journals are but one. Reports, including confidential reports are another, so open access doesn’t come into it.

      • I cannot find a ‘reply’ button for your response above (5:42 p.m.) so am posting my response as a new thread. I admit to feeling confused by your response. The REF quotation provided above says “the main panel welcomes all forms of output”. I would have thought the word “all” implies that open access papers are as valid as any others?

        Many of the genres listed in the above quotation (e.g. books, conference papers, journal papers) may be published behind a pay wall or open access (or sometimes even both). I can’t yet see, therefore, the grounds for asserting that “open access does not come into it”. It looks to me as if researchers are entitled to submit open access papers and, indeed, I believe many intend to do just that (not least because open access can sometimes make it easier to achieve ‘impact’).

      • Dave Carter Says:

        Sorry Anthony I did not phrase that very well, by “open access does not come into it” I meant that whether a paper or other output is open access or not does not affect whether it is eligible for submission to the REF.

      • telescoper Says:

        This is true for the current REF, but that may change in future manifestations.

    • telescoper Says:

      I note that you are writing on behalf of an academic publishing house. For the record, I will declare your vested interest in this discussion, as you seem to have forgotten to do so.

      • Thank you for this reply, but I have to say candidly that I think you’re being plain silly here. The publishing house that I run (The Professional and Higher Partnership) is a micro-publisher that publishes a handful of books per year. It does not publish journals. Most of our books are how-to books, not research publications. Our sole monograph is written by an author at an American university, who is therefore not included in the REF. I am struggling to see what the vested interest is that I should be declaring, and I suggest it might be better to focus on the logic of the arguments, which is in any case not affected by questions of interest (real or imaginary).

      • telescoper Says:

        Oooh! Touchy.

        I must confess that I just looked at the name of the outfit you represent “The Professional and Higher Partnership” and assumed that described its activities. I apologize for this error, which you have now clarified.

  16. Reply to your response (6:10 p.m.). I suggest that if readers take the trouble to treat your post (and comments) seriously enough to engage with them, you ought to reciprocate. I suggest that “Oooh! Touchy” doesn’t quite cut it. In fact, I suggest it’s infantile.

    So let me provide some guidance. Earlier you presumptuously declared a vested interest on my behalf. Now it would be in order for you to retract that statement.

  17. […] have a huge effect on funding allocations by HEFCE for at least the next 5 years. The process has many detractors, though most might grudgingly admit that the willingness to submit to this periodic rite […]

  18. […] the way, Pep, it’s worth following the discussion under Peter Coles’ excellent recent post on the working methods associated with the UK’s Research Excellence Framework. (Sidenote: In […]

  19. […] have a huge effect on funding allocations by HEFCE for at least the next 5 years. The process has many detractors, though most might grudgingly admit that the willingness to submit to this periodic rite […]

Comments are closed.
