Archive for Research Excellence Framework

The Stifling Effect of REF Impact

Posted in Education, Politics on April 22, 2014 by telescoper

Well, I’m back to civilization (more or less) and with my plan to watch a day of cricket at Sophia Gardens thwarted by the rain I decided to pop into an internet café and do a quick post about one of the rants that has been simmering on the back burner while I’ve been taking a break.

Just before the Easter vacation I had lunch with some colleagues from the Department of Physics & Astronomy at the University of Sussex. One of the things that came up was the changing fortunes of the department. After years of under-investment from the University administration, it was at one time at such a low ebb that it was in real danger of being closed down (despite its undoubted strengths in research and teaching). Fortunately help came in the form of SEPnet, which provided funds to support new initiatives in Physics not only at Sussex but across the South East. Moreover, the University administration had belatedly realized that a huge part of the institution’s standing in tables of international research rankings was being generated by the Department of Physics & Astronomy. In the nick of time the necessary resources were invested, the tide was turned, and there has been steady growth in staff and student numbers ever since.

As Head of the School of Mathematical and Physical Sciences I have had to deal with the budget for the Department of Physics & Astronomy. Just a decade ago very few physics departments in the UK were  financially solvent and most had to rely on generous subsidies from University funds to stay open. Those that did not receive such support were closed down, a fate which Sussex narrowly avoided but which befell, for example, the physics departments at Reading and Newcastle.

As I blogged about previously, the renaissance of Sussex physics seems not to be unique. Admissions to physics departments across the country are growing at a healthy rate, to the extent that new departments are being formed at, e.g. Lincoln and Portsmouth. None of this could have been imagined just ten years ago.

So will this new-found optimism be reflected in the founding of even more new physics departments? One would hope so, as I think it’s a scandal that there are only around 40 UK universities with physics departments. Call me old-fashioned but I think a university without a physics department is not a university at all. Thinking about this over the weekend however I realized that any new physics department is going to have grave problems under the system of allocating research funding known as the Research Excellence Framework.

A large slice (20%) of the funding allocated by the 2014 REF will be based on “Impact” which, roughly speaking, means the effect the research can be demonstrated to have had outside the world of academic research. This isn’t the largest component – 65% is allocated on the basis of the quality of “Outputs” (research papers etc.) – but it is a big chunk and will probably be very important in determining league table positions. It is probably going to be even larger in future versions of the REF.

Now here’s the rub. When an academic changes institution (as I have recently done, for example) he/she can take his/her outputs to the new institution. Thus, papers I wrote while at Cardiff could be submitted to the REF from Sussex. This is not the case with “impact”. The official guidance on submissions states:

Impact: The sub-panels will assess the ‘reach and significance’ of impacts on the economy, society and/or culture that were underpinned by excellent research conducted in the submitted unit, as well as the submitted unit’s approach to enabling impact from its research. This element will carry a weighting of 20 per cent.

The emphasis is mine.

The period during which the underpinning research must have been published is quite generous in length: 1 January 1993 to 31 December 2013. This is clearly intended to recognize the fact that some research takes a long time to generate measurable impact. The problem is that the underpinning research must have been done within the submitting unit; it can’t be brought in from elsewhere. If the unit is new and did not exist for most of this period, then it is much harder to generate impact no matter how brilliant the staff it recruits. Any new department in physics, or any other subject for that matter, will have to focus on research that can generate impact very rapidly indeed if it is to compete in the next REF, expected in 2018 or thereabouts. That is a powerful disincentive for universities to invest in research that may take many years to come to fruition. Five years is a particularly short time in experimental physics.


Six (very) bad things about the REF

Posted in Education, Science Politics on November 22, 2013 by telescoper

I see that Jon Butterworth has written a piece on the Grauniad website, entitled Six good things about the REF, the REF in question not being a black-clad figure of questionable parentage and visual acuity responsible for supervising a game of association football, but the Research Excellence Framework.

I agree with some of Jon’s comments and do believe that past Research Assessment Exercises have generally raised the quality of research in UK universities. I do however think that there are some very bad things about the way the REF is being implemented, and that these far outweigh the positives Jon mentions. In the interest of balance, therefore, I thought I’d respond with a list of six (very) bad things about the REF, and particularly how it applies to physics. I’ll keep them brief because I’ve blogged about most of them before:

  1. The rules positively encouraged universities to play games with selectivity. This is absurd. All academic staff on teaching and research contracts should be submitted if a true indication of research quality is to be obtained.
  2. The criteria for what constitutes 3* or 4* publications are vague and subjective, leaving everything in the hands of the panels. Worse, all paperwork will be shredded after the panel’s deliberations leaving no possibility for appeal. This absolutely stinks.
  3. How QR funding will be allocated on the basis of the REF is not made clear in advance of the submission. Nobody knows how heavily the funding will be skewed towards 4* and 3* submissions. Having encouraged departments to play games, therefore, the REF refuses to disclose the rules. It’s not even clear there will be any QR funding.
  4. The panels will be unable to perform a detailed peer review of submissions simply because there will be too many papers. Each panel will be expected to make decisions on many hundreds of papers, leaving time only for a cursory reading of each.
  5. Limiting the physics submission to 4 papers per person is ridiculous. This corresponds to a tiny fraction of the outputs of a typical physics researcher. If someone has written ten 4* publications in the REF period, why should these not be counted?
  6. Impact counts for a sizable fraction (20%) of the funding, but the rules governing what counts as “impact” are absurdly restrictive and clearly encourage short-term commercially-oriented boilerplate stuff at the expense of genuine long-term “blue skies” research.

Well, I got to six in just a few minutes and could easily get to sixty, but that will do for now. Perhaps you’d like to contribute your own bad things through the comments box?

The Dark Side of the REF

Posted in Finance, Science Politics on August 8, 2013 by telescoper

There’s a disturbing story in the latest Times Higher which reports that the University of Leicester has apparently reneged on a promise that non-submission to the forthcoming (2014) Research Excellence Framework (REF) would not have negative career consequences. They have now said that, except in exceptional circumstances, non-submitted academics will either be moved to a teaching-only contract (where there is a vacancy and they can demonstrate teaching excellence), or have their performance “managed”, with the threat of sacking if they don’t meet the specified targets. I’d heard rumours of this on the grapevine (i.e. Twitter) before the Times Higher story was published. It’s very worrying to have it confirmed, as it raises all kinds of questions about what might happen in departments that turn out to have disappointing REF results.

There are (at least) two possible reasons for non-inclusion of the outputs of a researcher and it is important to distinguish between them. One is that the researcher hasn’t enough high-quality outputs to submit. In the absence of individual extenuating circumstances, researchers are expected to submit four “outputs” (in my discipline that means “research papers”) for assessment. That’s a pretty minimal level of productivity, actually; such a number per year is a reasonable average for an active researcher in my field. A person employed on a contract that specifies their duties as Teaching and Research may therefore be under-performing if they can’t produce four papers over the period 2008-2013. I think some form of performance management may be justifiable in this case, but the primary aim should be to help the individual rather than show them the door. We all have fallow periods in research, and it’s not appropriate to rush to sack anyone who experiences a lean time. Andrew Wiles would have been considered ‘inactive’ had there been a REF in 1992 as he hadn’t published anything for years. Then he produced a proof of Fermat’s Last Theorem. Some things just take time.

A second reason for excluding a researcher from the REF is that the institution concerned may be making a tactical submission. As the Times Higher article explains:

The memo suggests that academics would be spared repercussions if, among other reasons, the number of individuals submitted is “constrained” by the volume of case studies their department intends to enter to demonstrate research impact.

Institutions must submit one case study for every 10 scholars entered.

Maria Nedeva, professor of science and innovation dynamics and policy at Manchester Business School, said the tactic of deciding how many academics to submit based on impact case study numbers was “rife”.

(Incidentally, the second paragraph is not quite right. The number of case studies required depends on the number of staff submitted as follows: for fewer than 15 staff, TWO case studies; for 15-24.99 staff, THREE case studies – and then for each additional ten members of staff entered a further case study is required.)
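
If you prefer that rule spelled out, here’s a little sketch of it in Python. It’s just my reading of the guidance as summarized above, so treat the exact boundary handling as an assumption rather than gospel:

```python
import math

def required_case_studies(staff_fte):
    """Number of impact case studies required for a submission of the given
    number of Category A staff (FTE), following the rule described above."""
    if staff_fte < 15:
        return 2
    # 15-24.99 FTE requires three; each further ten FTE entered adds one more
    # case study (exactly where the boundaries fall is my assumption).
    return 3 + math.floor((staff_fte - 15) / 10)

# Quick checks against the figures in the text:
assert required_case_studies(10) == 2   # fewer than 15 staff
assert required_case_studies(20) == 3   # 15-24.99 staff
assert required_case_studies(40) == 5   # two further blocks of ten beyond 15-24.99
```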

The statement at the end of the quote there is in line with my experience too. The point is that the REF is not just a means of allocating relatively small amounts of so-called ‘QR’ research funding. Indeed, it remains entirely possible that no funding at all will be allocated following the 2014 exercise. The thinking then is that the number of staff submitted is largely irrelevant; all that will count is league table position.

This is by no means the only example of the dangers that lurk when you take league tables too seriously.

If a department would be required to submit, say, four impact case studies were all its staff included in the REF submission, but only has three viable ones, it would not be unreasonable to submit fewer staff, because the overall profile would be dragged down by a weak impact case even if the output quality of all staff is high. There will certainly be highly active researchers in UK institutions, including many who hold sizable external research grants, whose outputs are not submitted to the REF. As the article points out, it would be very wrong for managers to penalize scholars who have been excluded because of this sort of game-playing. That’s certainly not going to happen in the School of Mathematical and Physical Sciences at Sussex University. Not while I’m Head of School, anyway.

Moreover, even researchers whose “outputs” are not selected may still contribute to the “Environment” and/or “Impact” sections so they still, in a very real sense, do participate in their department’s REF submission.

My opinion? All this silliness could easily have been avoided by requiring all staff in all units of assessment to be submitted by all departments. You know, like would have happened if the system were actually designed to identify and reward research excellence. Instead, it’s yet another example of a bureaucratic machine that’s become entirely self-serving. It exists simply because it exists.  Research would be much better off without it.

Counting for the REF

Posted in Open Access, Science Politics on April 20, 2013 by telescoper

It’s a lovely day in Brighton and I’m once again on campus for an Admissions Event at Sussex University, this time for the Mathematics Department in the School of Mathematical and Physical Sciences. After all the terrible weather we’ve had since I arrived in February, it’s a delight and a relief to see the campus at its best for today’s crowds. Anyway, now that I’ve finished my talk and the subsequent chats with prospective students and their guests I thought I’d do a quick blogette before heading back home and preparing for this evening’s Physics & Astronomy Ball. It’s all go around here.

What I want to do first of all is to draw attention to a very nice blog post by a certain Professor Moriarty who, in case you did not realise it, dragged himself away from his hiding place beneath the Reichenbach Falls and started a new life as Professor of Physics at Nottingham University.  Phil Moriarty’s piece basically argues that the only way to really judge the quality of a scientific publication is not by looking at where it is published, but by peer review (i.e. by getting knowledgeable people to read it). This isn’t a controversial point of view, but it does run counter to the current mania for dubious bibliometric indicators, such as journal impact factors and citation counts.

The forthcoming Research Excellence Framework involves an assessment of the research that has been carried out in UK universities over the past five years or so, and a major part of the REF will be the assessment of up to four “outputs” submitted by research-active members of staff over the relevant period (from 2008 to 2013). Reading Phil’s piece might persuade you to be happy that the assessment of the research outputs involved in the REF will be primarily based on peer review. If you are, then I suggest you read on because, as I have blogged about before, although peer review is fine in principle, the way that it will be implemented as part of the REF has me deeply worried.

The first problem arises from the scale of the task facing members of the panel undertaking this assessment. Each research-active member of staff is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The panel comprises 20 members.

As a rough guess let’s assume that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There is some uncertainty in these figures because (a) there is plenty of evidence that departments are going to be more selective in who is entered than was the case in 2008 and (b) some departments have increased their staff numbers significantly since 2008. These two factors work in opposite directions so, not knowing the size of either effect, it seems sensible to go with the numbers from the previous round for the purposes of my argument.

There are 20 members of the panel, so those 12800 paper-readings mean that, between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014, each member of the panel will have to read 640 research papers. That’s an average of about two a day…
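
For anyone who wants to check the sums, here they are laid out explicitly; the inputs are the rough guesses given above, not official REF figures:

```python
# Back-of-the-envelope workload for the Physics panel; every input here is a
# rough estimate quoted in the text, not an official figure.

departments = 40          # approximate number of UK physics departments
staff_per_dept = 40       # rough average of research-active staff per department
outputs_per_person = 4    # outputs requested from each submitted researcher
readings_per_output = 2   # each output read by at least two panel members
panel_size = 20           # members of the Physics panel

papers = departments * staff_per_dept * outputs_per_person   # 6400 papers
readings = papers * readings_per_output                      # 12800 paper-readings
per_member = readings / panel_size                           # 640 papers each

days = 365   # roughly a year between the submission deadline and the results
print(per_member, per_member / days)   # 640.0 papers, i.e. getting on for 2 a day
```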

It is therefore blindingly obvious that whatever the panel does do will not be a thorough peer review of each paper, equivalent to refereeing it for publication in a journal. The panel members simply won’t have the time to do what the REF administrators claim they will do. We will be lucky if they manage a quick skim of each paper before moving on. In other words, it’s a sham.

Now we are also told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4*  World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U   Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. The word on the street is that the weighting for 4* will be 9 and that for 3* only 1. “Internationally recognized” research will be regarded as worthless in the view of HEFCE. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read in order to place them into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not. The steep increase in weighting between 3* and 4* means that a borderline judgement could produce a drop in funding big enough to spell closure for a department.
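
To see just how steep that cliff is, here’s a toy calculation. The two quality profiles below are entirely invented for the sake of illustration; the only input taken from the text is the rumoured 9:1 weighting of 4* relative to 3*:

```python
# Illustration of how sensitive a 9:1 weighting would make QR funding to
# borderline 4*/3* calls. The profiles are made up; only the weights come
# from the rumour mentioned above.

weights = {"4*": 9, "3*": 1, "2*": 0, "1*": 0, "U": 0}

def qr_score(profile):
    """Weighted score for a quality profile (fraction of outputs in each band);
    any QR allocation would presumably scale with something like this."""
    return sum(weights[band] * fraction for band, fraction in profile.items())

# Two hypothetical departments whose submissions differ only in how the panel
# happened to call one output in ten at the 3*/4* borderline:
dept_a = {"4*": 0.3, "3*": 0.5, "2*": 0.2, "1*": 0.0, "U": 0.0}
dept_b = {"4*": 0.2, "3*": 0.6, "2*": 0.2, "1*": 0.0, "U": 0.0}

print(round(qr_score(dept_a), 2))                     # 3.2
print(round(qr_score(dept_b), 2))                     # 2.4
print(round(qr_score(dept_b) / qr_score(dept_a), 2))  # 0.75
```

On those (invented) numbers, having one output in ten nudged from 4* down to 3* costs a department a quarter of its weighted score.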

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I have no confidence that they will add any value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs”  are published, including a pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing from papers just published there.

The involvement of  a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used;  we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out from the REF establishment. Who knows what they actually do behind closed doors?  All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become  entirely self-serving. It is imposing increasingly  ridiculous administrative burdens on researchers, inventing increasingly  arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.

And that’s all just about “outputs”. I haven’t even started on “impact”….

REF moves the goalposts (again)

Posted in Bad Statistics, Education, Science Politics on January 18, 2013 by telescoper

The topic of the dreaded 2014 Research Excellence Framework came up quite a few times in quite a few different contexts over the last few days, which reminded me that I should comment on a news item that appeared a week or so ago.

As you may or may not be aware, the REF is meant to assess the excellence of university departments in various disciplines and distribute its “QR” research funding accordingly. Institutions complete submissions which include details of relevant publications etc. and then a panel sits in judgement. I’ve already blogged about all this: the panels clearly won’t have time to read every paper submitted in any detail at all, so the outcome is likely to be highly subjective. Moreover, HEFCE’s insane policy of awarding the bulk of its research funds only to the very highest grade (4* – “world-leading”) means that small variations in judged quality will turn into enormous discrepancies in the level of research funding. The whole thing is madness, but there seems no way to inject sanity into the process as the deadline for submissions remorselessly approaches.

Now another wrinkle has appeared on the already furrowed brows of those preparing REF submissions. The system allows departments to select staff to be entered; it’s not necessary for everyone to go in. Indeed, if only the very best researchers are entered then the typical score for the department will be high, so it will appear higher up in the league tables; and since the cash goes primarily to the top dogs, this might produce almost as much money as including a few less highly rated researchers.

On the other hand, this is a slightly dangerous strategy because it presupposes that one can predict which researchers and what research will be awarded the highest grade. A department will come a cropper if all its high fliers are deemed by the REF panels to be turkeys.

In Wales there’s something that makes this whole system even more absurd, which is that it’s almost certain that there will be no QR funding at all. Welsh universities are spending millions preparing for the REF despite the fact that they’ll get no money even if they do stunningly well. The incentive in Wales is therefore even stronger than it is in England to submit only the high-fliers, as it’s only the position in the league tables that will count.

The problem with a department adopting the strategy of being very selective is that it could have a very negative effect on the career development of younger researchers if they are not included in their department’s REF submission. There is also the risk that people who manage to convince their Head of School that they are bound to get four stars in the REF may not have the same success with the various grey eminences who make the decision that really matters.

Previous incarnations of the REF (namely the Research Assessment Exercises of 2008 and 2001) did not publish explicit information about exactly how many eligible staff were omitted from the submissions, largely because departments were extremely creative in finding ways of hiding staff they didn’t want to include.

Now however it appears there are plans that the Higher Education Statistics Agency (HESA) will publish its own figures on how many staff it thinks are eligible for inclusion in each department. I’m not sure how accurate these figures will be but they will change the game, in that they will allow compilers of league tables to draw up lists of the departments that prefer playing games to just allowing the REF panels to judge the quality of their research.

I wonder how many universities are hastily revising their submission plans in the light of this new twist?

Reffing Madness

Posted in Science Politics on June 30, 2012 by telescoper

I’m motivated to make a quick post in order to direct you to a blog post by David Colquhoun that describes the horrendous behaviour of the management at Queen Mary, University of London in response to the Research Excellence Framework. It seems that wholesale sackings are in the pipeline there as a result of a management strategy to improve the institution’s standing in the league tables by “restructuring” some departments.

To call this strategy “flawed” would be the understatement of the year. Idiotic is a far better word. The main problem is that the criteria being applied to retain or dismiss staff bear no obvious relation to those adopted by the REF panels. To make matters worse, Queen Mary has charged two of its own academics with “gross misconduct” for having the temerity to point out the stupidity of its management’s behaviour. Read on here for more details.

With the deadline for REF submissions fast approaching, it’s probably the case that many UK universities are going into panic mode, attempting to boost their REF score by shedding staff perceived to be insufficiently excellent in research and/or luring in research “stars” from elsewhere. Draconian though the QMUL approach may seem, I fear it will be repeated across the sector. Clueless university managers are trying to guess what the REF panels will think of their submissions by staging mock assessments involving external experts. The problem is that nobody knows what the actual REF panels will do, except that if the last Research Assessment Exercise is anything to go by, what they do will be nothing like what they said they would do.

Nowhere is the situation more absurd than here in Wales. The purported aim of the REF is to allocate the so-called “QR” research funding to universities. However, it is an open secret that in Wales there simply isn’t going to be any QR money at all. Leighton Andrews has stripped the Higher Education budget bare in order to pay for his policy of encouraging Welsh students to study in England by paying their fees there.

So here we have to enter the game, do the mock assessments, write our meaningless “impact” cases, and jump through all manner of pointless hoops, with the inevitable result that even if we do well we’ll get absolutely no QR money at the end of it. The only strategy that makes sense for Welsh HEIs such as Cardiff University, where I work, is to submit only those researchers guaranteed to score highly. That way at least we’ll do better in the league tables. It won’t matter how many staff actually get submitted, as the multiplier is zero.

There’s no logical argument why Welsh universities should be in the REF at all, given that there’s no reward at the end. But we’re told we have to by the powers that be. Everyone’s playing games in which nobody knows the rules but in which the stakes are people’s careers. It’s madness.

I can’t put it better than this quote:

These managers worry me. Too many are modest achievers, retired from their own studies, intoxicated with jargon, delusional about corporate status and forever banging the metrics gong. Crucially, they don’t lead by example.

Any reader of this blog who works in a university will recognize the sentiments expressed there. But let’s not blame it all on the managers. They’re doing stupid things because the government has set up a stupid framework. There isn’t a single politician in either England or Wales with the courage to do the right thing, i.e. to admit the error and call the whole thing off.

The Transparent Dishonesty of the Research Excellence Framework

Posted in Open Access, Science Politics on May 30, 2012 by telescoper

Some of my colleagues in the School of Physics & Astronomy recently attended a briefing session about the forthcoming Research Excellence Framework. This, together with the post I reblogged earlier this morning, suggested that I should re-hash an article I wrote some time ago about the arithmetic of the REF, and how it will clearly not do what it says on the tin.

The first thing is the scale of the task facing members of the panel undertaking the assessment. Every research-active member of staff in every University in the UK is requested to submit four research publications (“outputs”) to the panel, and we are told that each of these will be read by at least two panel members. The Physics panel comprises 20 members.

As a rough guess I’d say that the UK has about 40 Physics departments, and the average number of research-active staff in each is probably about 40. That gives about 1600 individuals for the REF. Actually the number of category A staff submitted to the 2008 RAE was 1,685.57 FTE (Full-Time Equivalent), pretty close to this figure. At 4 outputs per person that gives 6400 papers to be read. We’re told that each will be read by at least two members of the panel, so that gives an overall job size of 12800 paper-readings. There are 20 members of the panel, so that means that between 29th November 2013 (the deadline for submissions) and the announcement of the results in December 2014 each member of the panel will have to have read 640 research papers. That’s an average of about two a day. Every day. Weekends included.

Now we are told the panel will use their expert judgment to decide which outputs belong to the following categories:

  • 4*  World Leading
  • 3* Internationally Excellent
  • 2* Internationally Recognized
  • 1* Nationally Recognized
  • U   Unclassified

There is an expectation that the so-called QR funding allocated as a result of the 2014 REF will be heavily weighted towards 4*, with perhaps a small allocation to 3* and probably nothing at all for lower grades. In other words “Internationally recognized” research will probably be deemed completely worthless by HEFCE. Will the papers belonging to the category “Not really understood by the panel member” suffer the same fate?

The panel members will apparently know enough about every single one of the papers they are going to read in order to place them into one of the above categories, especially the crucial ones “world-leading” or “internationally excellent”, both of which are obviously defined in a completely transparent and objective manner. Not.

We are told that after forming this judgement based on their expertise the panel members will “check” the citation information for the papers. This will be done using the SCOPUS service provided (no doubt at considerable cost) by Elsevier, which by sheer coincidence also happens to be a purveyor of ridiculously overpriced academic journals. No doubt Elsevier are on a nice little earner peddling meaningless data for the HEFCE bean-counters, but I haven’t any confidence that it will add much value to the assessment process.

There have been high-profile statements to the effect that the REF will take no account of where the relevant “outputs”  are published, including a recent pronouncement by David Willetts. On the face of it, that would suggest that a paper published in the spirit of Open Access in a free archive would not be disadvantaged. However, I very much doubt that will be the case.

I think if you look at the volume of work facing the REF panel members it’s pretty clear that citation statistics will be much more important for the Physics panel than we’ve been led to believe. The panel simply won’t have the time or the breadth of understanding to do an in-depth assessment of every paper, so will inevitably in many cases be led by bibliometric information. The fact that SCOPUS doesn’t cover the arXiv means that citation information will be entirely missing from papers just published there.

The involvement of  a company like Elsevier in this system just demonstrates the extent to which the machinery of research assessment is driven by the academic publishing industry. The REF is now pretty much the only reason why we have to use traditional journals. It would be better for research, better for public accountability and better economically if we all published our research free of charge in open archives. It wouldn’t be good for academic publishing houses, however, so they’re naturally very keen to keep things just the way they are. The saddest thing is that we’re all so cowed by the system that we see no alternative but to participate in this scam.

Incidentally we were told before the 2008 Research Assessment Exercise that citation data would emphatically not be used;  we were also told afterwards that citation data had been used by the Physics panel. That’s just one of the reasons why I’m very sceptical about the veracity of some of the pronouncements coming out from the REF establishment. Who knows what they actually do behind closed doors?  All the documentation is shredded after the results are published. Who can trust such a system?

To put it bluntly, the apparatus of research assessment has done what most bureaucracies eventually do; it has become  entirely self-serving. It is imposing increasingly  ridiculous administrative burdens on researchers, inventing increasingly  arbitrary assessment criteria and wasting increasing amounts of money on red tape which should actually be going to fund research.
