Archive for Research Excellence Framework

Out of the REF

Posted in Biographical, Cardiff, Maynooth with tags , , , , on November 25, 2020 by telescoper

I was talking over Zoom with some former colleagues from the United Kingdom last week, and was surprised to learn that, despite the Covid-19 pandemic, the 2021 Research Excellence Framework is ploughing ahead next year, only slightly delayed. There’s no stopping bureaucratic juggernauts once they get going…

One of the major plusses of being in Ireland is that, outside the UK academic system, there is no REF, so one can avoid the enormous workload and stress generated by this exercise in bean-counting. My memories of the last REF in 2014, when I was Head of School at Sussex, are quite painful, as it went badly for us then. I hope that the long-term investments we made then will pay off and that things turn out better for Sussex this time, especially for the Department of Physics & Astronomy, for which the impact and environment components of the assessment dragged the overall score down.

The census period for the new REF is 1st August 2013 to 31st July 2020. Not being involved personally in the REF this time round I haven’t really paid much attention to the changes that have been adopted since 2014. One I knew about is that the rules make it harder for institutions to leave staff out of their REF return. Some universities played the system in 2014 by being very selective about whom they put in. Only staff with papers considered likely to be rated top-notch were submitted.

Having a quick glance at the documents I see two other significant differences.

One is that in 2014, with very few exceptions, all staff had to submit four research outputs (i.e. papers) to be graded. In 2021 the system is more flexible: the total number of outputs must equal 2.5 times the summed FTE (full-time equivalent) of the unit’s submitted staff, with no individual submitting more than 5 and none fewer than 1 (except in special cases related to Covid-19). Overall, then, there will be fewer outputs than before, the multiplier of FTE being 2.5 (2021) instead of 4 (2014). There will still be a lot, of course, so the panels will have a great deal of reading to do. If that’s what they do with the papers, that is. They’ll probably just look up citations…
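Just to make the arithmetic concrete, here is a minimal sketch in Python of how the 2021 output requirement works out for a hypothetical unit; the staff list and the rounding convention are my own assumptions for illustration, not anything taken from the official guidance.

```python
# Hypothetical example of the REF 2021 output rule: total outputs = 2.5 x summed FTE,
# with each submitted individual contributing between 1 and 5 outputs.
# The staff FTE list and the rounding convention are assumptions for illustration only.

staff_fte = [1.0, 1.0, 0.6, 0.5, 1.0, 0.8]   # FTE of each submitted staff member (made up)

total_fte = sum(staff_fte)
required_outputs = round(2.5 * total_fte)     # the 2.5 x FTE multiplier for REF 2021

# Sanity check: the per-person limits (minimum 1, maximum 5) bound what is achievable.
min_possible = 1 * len(staff_fte)
max_possible = 5 * len(staff_fte)

print(f"Summed FTE: {total_fte:.1f}")
print(f"Outputs required: {required_outputs}")
print(f"Achievable range given per-person limits: {min_possible} to {max_possible}")
```

With these made-up numbers the unit would need about 12 outputs in total, compared with 24 (four each) under the 2014 rules.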

The other difference relates to staff who have left an institution during the census period. In 2014 the institution to which a researcher moved got all the credit for the publications, while the institution they left got nothing. In 2021, institutions “may return the outputs of staff previously employed as eligible where the output was first made publicly available during the period of eligible employment, within the set number of outputs required.” I suppose this is to prevent the departure of a staff member causing too much damage to the institution they left.

I was wondering about this last point when chatting with friends the other day. I moved institutions twice during the relevant census period, from Sussex to Cardiff and then from Cardiff to Maynooth. In principle, therefore, both former employers could submit to the 2021 REF the outputs I published while I was there. I only published a dozen or so papers while I was at Sussex – the impact of being Head of School on my research productivity was considerable – and none of them are particularly highly cited, so I don’t think that Sussex will want to submit any of them, but they could if they wanted to. They don’t have to ask my permission!

I doubt if Cardiff will be worried about my papers. Among other things they have a stack of gravitational wave papers that should all be 4*.

Anyway, thinking about the REF, an amusing idea about research assessment occurred to me. My idea was to set up a sort of anti-REF (perhaps the Research Inferiority Framework) based not on the best outputs produced by an institution’s researchers but on the worst. The institutions producing the largest number of inferior papers could receive financial penalties and get relegated in the league tables for encouraging staff to write too many papers that nobody ever reads or that are just plain wrong. My guess is that papers published in Nature might figure even more prominently in this exercise.

The Anomaly of Research England

Posted in Politics, Science Politics with tags , , , , on August 16, 2017 by telescoper

The other day I was surprised to see this tweet announcing the impending formation of a new council under the umbrella of the new organisation UK Research & Innovation (UKRI):

These changes are consequences of the Higher Education and Research Act (2017) which was passed at the end of the last Parliament before the Prime Minister decided to reduce the Government’s majority by calling a General Election.

It seems to me that it’s very strange indeed to have a new council called Research England sitting inside an organisation that purports to be a UK-wide outfit without having a corresponding Research Wales, Research Scotland and Research Northern Ireland. The seven existing research councils which will henceforth sit alongside Research England within UKRI are all UK-wide.

This anomaly stems from the fact that Higher Education policy is ostensibly a devolved matter, meaning that England, Wales, Scotland and Northern Ireland each have separate bodies to oversee their universities. Included in the functions of these bodies is the so-called QR funding, which is allocated on the basis of the Research Excellence Framework. The REF used to be administered by the Higher Education Funding Council for England (HEFCE), but each devolved council distributed its own funds in its own way. The new Higher Education and Research Act, however, abolishes HEFCE and transfers some of its functions to an organisation called the Office for Students, but not those connected with research. Hence the creation of the new `Research England’. This will not only distribute QR funding among English universities but also administer a number of interdisciplinary research programmes.

The dual support system of government funding consists of block grants of QR funding allocated as above, alongside grants targeted at specific projects awarded by the Research Councils (such as the Science and Technology Facilities Council, which is responsible for astronomy, particle physics and nuclear physics research). There is nervousness in England that the new structure will put both elements of the dual support system inside the same organisation, but my greatest concern is that by excluding Wales, Scotland and Northern Ireland, English universities will be given an unfair advantage when it comes to interdisciplinary research. Surely there should be representation within UKRI for Wales, Scotland and Northern Ireland too?

Incidentally, the Science and Technology Facilities Council (STFC) has started the process of recruiting a new Executive Chair. If you’re interested in this position you can find the advertisement here. Ominously, the only thing mentioned under `Skills Required’ is `Change Management’.

Stern Response

Posted in Science Politics with tags , , on July 28, 2016 by telescoper

The results of the Stern Review of the process for assessing university research and allocating public funding have been published today. This is intended to inform the way the next Research Excellence Framework (REF) will be run, probably in 2020, so it’s important for all researchers in UK universities.

Here are the main recommendations, together with brief comments from me (in italics):

  1. All research active staff should be returned in the REF. Good in principle, but what is to stop institutions moving large numbers of staff onto teaching-only contracts (which is what happened in New Zealand when such a move was made)?
  2. Outputs should be submitted at Unit of Assessment level with a set average number per FTE but with flexibility for some faculty members to submit more and others less than the average. Outputs are countable and therefore “fewer” rather than “less”. Other than that, having some flexibility seems fair to me as long as it’s not easy to game the system. Looking in more detail at the report, it suggests that some could submit up to six and others potentially none, with an average of perhaps two across the UoA. I’m not sure the precise numbers make sense, but the idea seems reasonable.
  3. Outputs should not be portable. Presumably this doesn’t mean that only huge books can be submitted, but that outputs do not transfer when staff transfer. I don’t think this is workable; what should happen instead is that credit for research should be shared between institutions when a researcher moves from one to another.
  4. Panels should continue to assess on the basis of peer review. However, metrics should be provided to support panel members in their assessment, and panels should be transparent about their use. Good. Metrics only tell part of the story.
  5. Institutions should be given more flexibility to showcase their interdisciplinary and collaborative impacts by submitting ‘institutional’ level impact case studies, part of a new institutional level assessment. It’s a good idea to promote interdisciplinarity, but it’s not easy to make it happen…
  6. Impact should be based on research of demonstrable quality. However, case studies could be linked to a research activity and a body of work as well as to a broad range of research outputs. This would be a good move. The existing rules for Impact seem unnecessarily muddled.
  7. Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socio-economic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching. Also good.
  8. A new, institutional level Environment assessment should include an account of the institution’s future research environment strategy, a statement of how it supports high quality research and research-related activities, including its support for interdisciplinary and cross-institutional initiatives and impact. It should form part of the institutional assessment and should be assessed by a specialist, cross-disciplinary panel. Seems like a reasonable idea, but a “specialist, cross-disciplinary” panel might be hard to assemble…
  9. That individual Unit of Assessment environment statements are condensed, made complementary to the institutional level environment statement and include those key metrics on research intensity specific to the Unit of Assessment. Seems like a reasonable idea.
  10. Where possible, REF data and metrics should be open, standardised and combinable with other research funders’ data collection processes in order to streamline data collection requirements and reduce the cost of compiling and submitting information. Reasonable, but a bit vague.
  11. That Government, and UKRI, could make more strategic and imaginative use of REF, to better understand the health of the UK research base, our research resources and areas of high potential for future development, and to build the case for strong investment in research in the UK. This sounds like it means more political interference in the allocation of research funding…
  12. Government should ensure that there is no increased administrative burden to Higher Education Institutions from interactions between the TEF and REF, and that they together strengthen the vital relationship between teaching and research in HEIs. I’ll believe that when I see it.

Any further responses (stern or otherwise) are welcome through the comments box!

 

Lognormality Revisited (Again)

Posted in Biographical, Science Politics, The Universe and Stuff with tags , , , , , , , on May 10, 2016 by telescoper

Today provided me with a (sadly rare) opportunity to join in our weekly Cosmology Journal Club at the University of Sussex. I don’t often get to go because of meetings and other commitments. Anyway, one of the papers we looked at (by Clerkin et al.) was entitled Testing the Lognormality of the Galaxy Distribution and weak lensing convergence distributions from Dark Energy Survey maps. This provides yet more examples of the unreasonable effectiveness of the lognormal distribution in cosmology. Here’s one of the diagrams, just to illustrate the point:

Log_galaxy_counts

The points here are from MICE simulations. Not simulations of mice, of course, but simulations of MICE (Marenostrum Institut de Ciencies de l’Espai). Note how well the curves from a simple lognormal model fit the calculations that need a supercomputer to perform them!

The lognormal model used in the paper is basically the same as the one I developed in 1990 with Bernard Jones in what has turned out to be my most-cited paper. In fact the whole project was conceived, work done, written up and submitted in the space of a couple of months during a lovely visit to the fine city of Copenhagen. I’ve never been very good at grabbing citations – I’m more likely to fall off bandwagons than jump onto them – but this little paper seems to keep getting citations. It hasn’t got that many by the standards of some papers, but it has carried on being referred to for almost twenty years, which I’m quite proud of; the citations-per-year statistics even seem to have increased recently. The model we proposed turned out to be extremely useful in a range of situations, which I suppose accounts for the citation longevity:

nph-ref_history

Citations die away for most papers, but this one is actually attracting more interest as time goes on! I don’t think this is my best paper, but it’s definitely the one I had most fun working on. I remember we had the idea of doing something with lognormal distributions over coffee one day, and just a few weeks later the paper was finished. In some ways it’s the most simple-minded paper I’ve ever written – and that’s up against some pretty stiff competition – but there you go.

Lognormal_abstract

The lognormal seemed an interesting idea to explore because it applies to non-linear processes in much the same way as the normal distribution does to linear ones. What I mean is that if you have a quantity Y which is the sum of n independent effects, Y=X1+X2+…+Xn, then the distribution of Y tends to be normal by virtue of the Central Limit Theorem, regardless of the distribution of the Xi. If, however, the process is multiplicative, so that Y=X1×X2×…×Xn, then log Y = log X1 + log X2 + …+ log Xn, and the Central Limit Theorem tends to make log Y normal, which is exactly what it means for Y to be lognormal.
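As a quick illustration of that argument, here is a small simulation sketch in Python (using NumPy; the choice of uniformly distributed positive factors is arbitrary and purely for illustration) showing that the logarithm of a product of many independent positive random variables ends up looking Gaussian, i.e. the product itself is approximately lognormal.

```python
import numpy as np

# Multiplicative Central Limit Theorem illustration:
# Y = X1 * X2 * ... * Xn for independent positive Xi, so
# log Y = log X1 + ... + log Xn tends to a normal distribution,
# which means Y itself tends to a lognormal distribution.
# The Uniform(0.5, 1.5) factors are an arbitrary choice for illustration.

rng = np.random.default_rng(42)
n_factors = 200        # number of multiplicative effects per realisation
n_samples = 100_000    # number of realisations

X = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
Y = X.prod(axis=1)
logY = np.log(Y)

# If Y is close to lognormal, log Y should be close to Gaussian, so its
# skewness and excess kurtosis should both be near zero.
mean, std = logY.mean(), logY.std()
skew = np.mean(((logY - mean) / std) ** 3)
kurt = np.mean(((logY - mean) / std) ** 4) - 3.0

print(f"log Y: mean = {mean:.3f}, std = {std:.3f}")
print(f"skewness = {skew:.3f}, excess kurtosis = {kurt:.3f} (both ~0 for a Gaussian)")
```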

The lognormal is a good distribution for things produced by multiplicative processes, such as hierarchical fragmentation or coagulation processes: the distribution of sizes of the pebbles on Brighton beach  is quite a good example. It also crops up quite often in the theory of turbulence.

I’ll mention one other thing  about this distribution, just because it’s fun. The lognormal distribution is an example of a distribution that’s not completely determined by knowledge of its moments. Most people assume that if you know all the moments of a distribution then that has to specify the distribution uniquely, but it ain’t necessarily so.
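For the curious, the classic counterexample (due to Heyde) can be written down explicitly. The following is just a sketch of that standard result in LaTeX: a whole family of densities sharing every moment with the standard lognormal while being genuinely different distributions.

```latex
% Heyde's classical counterexample: distinct densities with exactly the same
% moments as the standard lognormal distribution.
\[
  f(x) = \frac{1}{x\sqrt{2\pi}}\exp\!\left(-\frac{(\ln x)^2}{2}\right),
  \qquad x > 0,
\]
\[
  f_a(x) = f(x)\left[1 + a\sin(2\pi\ln x)\right], \qquad |a|\le 1 .
\]
% Each f_a is a genuine probability density, and for every integer n >= 0 the
% substitution u = ln x - n gives
\[
  \int_0^\infty x^{n} f(x)\sin(2\pi\ln x)\,\mathrm{d}x
  = e^{n^2/2}\int_{-\infty}^{\infty}\frac{e^{-u^2/2}}{\sqrt{2\pi}}
    \sin(2\pi u)\,\mathrm{d}u = 0 ,
\]
% so every f_a has the same moments E[X^n] = e^{n^2/2} as the lognormal itself.
```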

If you’re wondering why I mentioned citations, it’s because they’re playing an increasing role in attempts to measure the quality of research done in UK universities. Citations definitely contain some information, but interpreting them isn’t at all straightforward. Different disciplines have hugely different citation rates, for one thing. Should one count self-citations? And how do you apportion citations to multi-author papers? Suppose a paper with a thousand citations has 25 authors. Does each of them get the thousand citations, or should each get 1000/25? Or, to put it another way, how does a single-author paper with 100 citations compare to a 50-author paper with 101?
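Just to show how much the answer depends on the convention, here is a toy comparison in Python of “whole counting”, where every author gets full credit, against “fractional counting”, where the citations are shared equally among the authors, using the made-up numbers from the example above.

```python
# Toy comparison of two citation-crediting conventions for the example in the text:
# a single-author paper with 100 citations versus a 50-author paper with 101.

papers = [
    {"label": "single-author paper", "citations": 100, "n_authors": 1},
    {"label": "50-author paper",     "citations": 101, "n_authors": 50},
]

for p in papers:
    whole = p["citations"]                        # whole counting: full credit to each author
    fractional = p["citations"] / p["n_authors"]  # fractional counting: credit shared equally
    print(f"{p['label']}: whole counting = {whole}, "
          f"fractional counting per author = {fractional:.2f}")
```

Under whole counting each author of the second paper looks slightly better than the single author of the first; under fractional counting they look about fifty times worse.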

Or perhaps a better metric would be the logarithm of the number of citations?

Research Funding – A Modest Proposal

Posted in Education, Science Politics with tags , , , , , on September 9, 2015 by telescoper

This morning, the Minister for Universities, Jo Johnson, made a speech in which, among other things, he called for research funding to be made simpler. Under the current “dual funding” system, university researchers receive money through two main routes: one is the Research Excellence Framework (REF), which leads to so-called “QR” funding allocations made via the Higher Education Funding Council for England (HEFCE); and the other is through research grants which have to be applied for competitively from various sources, including the seven Research Councils.

Part of the argument why this system needs to be simplified is the enormous expense and administrative burden of the Research Excellence Framework. Many people have commented to me that although they hate the REF and accept that it’s ridiculously expensive and time-consuming, they don’t see any alternative. I’ve been thinking about it and thought I’d make a suggestion. Feel free to shoot it down in flames through the box at the end, but I’ll begin with a short introduction.

Those of you old enough to remember will know that before 1992 (when the old `polytechnics’ were given the go-ahead to call themselves `universities’) the University Funding Council – the forerunner of HEFCE – allocated research funding to universities by a simple formula related to the number of undergraduate students. When the number of universities suddenly increased this was no longer sustainable, so the funding agency began a series of Research Assessment Exercises to assign research funds (now called QR funding) based on the outcome. This prevented research money going to departments that weren’t active in research, most (but not all) of which were in the ex-Polytechnics. Over the years the apparatus of research assessment has become larger, more burdensome, and incomprehensibly obsessed with the short-term impact of research. Like most bureaucracies it has lost sight of its original purpose and has now become something that exists purely for its own sake.

It is especially indefensible at this time of deep cuts to university core funding that we are being forced to waste an increasingly large fraction of our decreasing budgets on staff-time that accomplishes nothing useful except pandering to the bean counters.

My proposal is to abandon the latest manifestation of research assessment mania, i.e. the REF, and return to a simple formula, much like the pre-1992 system, except that QR funding should be based on research student (i.e. PhD student) rather than undergraduate numbers. There’s an obvious risk of game-playing, and this idea would only stand a chance of working at all if the formula involved the number of successfully completed research degrees over a given period.

I can also see an argument that four-year undergraduate students (e.g. MPhys or MSci students) should also be included in the formula, as most of these degrees involve a project that requires a strong research environment.
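For what it’s worth, here is a minimal sketch in Python of what such a formula-based allocation might look like; the departments, completion numbers, the weighting given to project students and the size of the overall pot are all invented for illustration and are not part of any actual proposal.

```python
# Hypothetical sketch of a formula-based QR allocation, pro rata to completed
# research degrees, with an optional smaller weight for 4-year project students.
# All numbers, including the 0.25 weighting, are invented purely for illustration.

TOTAL_QR_POT = 1_000_000_000   # notional annual QR pot in pounds (made up)
PROJECT_STUDENT_WEIGHT = 0.25  # assumed weight of an MPhys/MSci project student vs a PhD

departments = {
    # name: (completed PhDs over the period, 4-year project students)
    "Dept A": (40, 120),
    "Dept B": (15, 60),
    "Dept C": (0, 30),         # little research activity, so a small share
}

def formula_units(phds, project_students):
    """Weighted 'research training' units for one department."""
    return phds + PROJECT_STUDENT_WEIGHT * project_students

total_units = sum(formula_units(*v) for v in departments.values())

for name, (phds, projects) in departments.items():
    share = formula_units(phds, projects) / total_units
    print(f"{name}: {share:6.1%} of QR = £{share * TOTAL_QR_POT:,.0f}")
```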

Among the advantages of this scheme are that it’s simple, easy to administer, would not spread QR funding to non-research departments, and would not waste hundreds of millions of pounds on bureaucracy that would be better spent actually doing research. It would also maintain the current “dual support” system for research, if that’s a benefit.

I’m sure you’ll point out disadvantages through the comments box!


The Impact of Impact

Posted in Science Politics with tags , on February 18, 2015 by telescoper

Interesting analysis of the 2014 REF results by my colleague Seb Oliver. Among other things, it shows that Physics was the subject in which “Impact had the greatest impact”.

Seb Boyd

 The Impact of Impact

I wrote the following article to explore how Impact in the Research Excellence Framework 2014 (REF2014) affected the average scores of departments (and hence rankings). This produced a “league table” of how strongly impact affected different subjects. Some of the information in this article was used in a THE article by Paul Jump due to come out at 00:00 on 19th Feb 2015. I’ve now also produced ranking tables for each UoA using the standardised weighting I advocate below (see Standardised Rankings).

Effective weight of each sub-profile in the GPA ranking (%):

UoA  Unit of Assessment                              Outputs  Impact  Envir.
  9  Physics                                            37.9    38.6    23.5
 23  Sociology                                          34.1    38.6    27.3
 10  Mathematical Sciences                              37.6    37.5    24.9
 24  Anthropology and Development Studies               40.2    35.0    24.8
  6  Agriculture, Veterinary and Food Science           42.0    33.0    25.0
 31  Classics                                           43.3    32.6    24.0
 16  Architecture, Built Environment and Planning       48.6    31.1    20.3

View original post 1,558 more words

A whole lotta cheatin’ going on? REF stats revisited

Posted in Education, Science Politics with tags , , , on January 28, 2015 by telescoper

Here’s a scathing analysis of the Research Excellence Framework. I don’t agree with many of the points raised and will explain why in a subsequent post (if and when I get the time), but I’m reblogging it here in the hope that it will provoke some comments either here or on the original post (also a wordpress site).

coasts of bohemia

 

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that “a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF”[1] rings true in a world in which Cardiff University can truthfully[2] claim that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise” from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 “Table of Excellence,” which is based on the GPA of the scores assigned by the REF’s “expert panels” to the three…

View original post 2,992 more words

Lognormality Revisited

Posted in Biographical, Science Politics, The Universe and Stuff with tags , , , , , on January 14, 2015 by telescoper

I was looking up the reference for an old paper of mine on ADS yesterday and was surprised to find that it is continuing to attract citations. Thinking about the paper reminds me of the fun time I had in Copenhagen while it was being written. I was invited there in 1990 by Bernard Jones, who used to work at the Niels Bohr Institute. I stayed there for several weeks over the May/June period, which is the best time of year for Denmark; it’s sufficiently far north (about the same latitude as Aberdeen) that the summer days are very long, and when it’s light until almost midnight it’s very tempting to spend a lot of time out late at night.

As well as being great fun, that little visit also produced what has turned out to be my most-cited paper. In fact the whole project was conceived, work done, written up and submitted in the space of a couple of months. I’ve never been very good at grabbing citations – I’m more likely to fall off bandwagons than jump onto them – but this little paper seems to keep getting citations. It hasn’t got that many by the standards of some papers, but it has carried on being referred to for almost twenty years, which I’m quite proud of; the citations-per-year statistics even seem to have increased recently. The model we proposed turned out to be extremely useful in a range of situations, which I suppose accounts for the citation longevity:

lognormal

I don’t think this is my best paper, but it’s definitely the one I had most fun working on. I remember we had the idea of doing something with lognormal distributions over coffee one day,  and just a few weeks later the paper was  finished. In some ways it’s the most simple-minded paper I’ve ever written – and that’s up against some pretty stiff competition – but there you go.

Picture1

The lognormal seemed an interesting idea to explore because it applies to non-linear processes in much the same way as the normal distribution does to linear ones. What I mean is that if you have a quantity Y which is the sum of n independent effects, Y=X1+X2+…+Xn, then the distribution of Y tends to be normal by virtue of the Central Limit Theorem, regardless of the distribution of the Xi. If, however, the process is multiplicative, so that Y=X1×X2×…×Xn, then log Y = log X1 + log X2 + …+ log Xn, and the Central Limit Theorem tends to make log Y normal, which is exactly what it means for Y to be lognormal.

The lognormal is a good distribution for things produced by multiplicative processes, such as hierarchical fragmentation or coagulation processes: the distribution of sizes of the pebbles on Brighton beach  is quite a good example. It also crops up quite often in the theory of turbulence.

I’ll mention one other thing  about this distribution, just because it’s fun. The lognormal distribution is an example of a distribution that’s not completely determined by knowledge of its moments. Most people assume that if you know all the moments of a distribution then that has to specify the distribution uniquely, but it ain’t necessarily so.

If you’re wondering why I mentioned citations, it’s because it looks like they’re going to play a big part in the Research Excellence Framework, yet another new bureaucratic exercise that attempts to measure the quality of research done in UK universities. Unfortunately, using citations isn’t straightforward. Different disciplines have hugely different citation rates, for one thing. Should one count self-citations? And how do you apportion citations to multi-author papers? Suppose a paper with a thousand citations has 25 authors. Does each of them get the thousand citations, or should each get 1000/25? Or, to put it another way, how does a single-author paper with 100 citations compare to a 50-author paper with 101?

Or perhaps the REF panels should use the logarithm of the number of citations instead?

That Was The REF That Was..

Posted in Finance, Science Politics with tags , , , , , , on December 18, 2014 by telescoper

I feel obliged to comment on the results of the 2014 Research Excellence Framework (REF) that were announced today. Actually, I knew about them yesterday but the news was under embargo until one minute past midnight by which time I was tucked up in bed.

The results for the two Units of Assessment relevant to the School of Mathematical and Physical Sciences are available online here for Mathematical Sciences and here for Physics and Astronomy.

To give some background: the overall REF score for a Department is obtained by adding three different components: outputs (the quality of research papers); impact (referring to impact beyond academia); and environment (which measures such things as grant income, numbers of PhD students and general infrastructure). These are weighted at 65%, 20% and 15% respectively.

Scores are assigned to these categories, e.g. for submitted outputs (usually four per staff member) on a scale of 4* (world-leading), 3* (internationally excellent), 2* (internationally recognised), 1* (nationally recognised) and unclassified, and for impact on a scale of 4* (outstanding), 3* (very considerable), 2* (considerable), 1* (recognised but modest) and unclassified. The number of impact cases to be submitted depended on the number of staff submitted: two for up to 15 staff, three for between 15 and 25, and increasing in a like manner for larger numbers.

The REF will control the allocation of funding in a manner yet to be decided in detail, but it is generally thought that anything scoring 2* or less will attract no funding (so the phrase “internationally recognised” really means “worthless” in the REF, as does “considerable” when applied to impact). It is also thought likely that funding will be heavily weighted towards 4* , perhaps with a ratio of 9:1 between 4* and 3*.
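To make the arithmetic above a bit more explicit, here is a rough sketch in Python of how an overall quality profile, its Grade Point Average and a notional funding weight could be computed under these rules. The sub-profile percentages are invented for illustration; the 65/20/15 weighting is the published one, but the 9:1 weighting of 4* to 3* (with nothing below 3*) is only the rumoured funding scheme, not a confirmed formula.

```python
# Rough sketch of the 2014 REF arithmetic described above.
# Sub-profile quality distributions below are invented for illustration.

WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

# Percentage of each sub-profile judged 4*, 3*, 2*, 1*, unclassified (made up).
profiles = {
    "outputs":     [55, 38, 6, 1, 0],
    "impact":      [15, 15, 50, 20, 0],
    "environment": [25, 50, 25, 0, 0],
}

STARS = [4, 3, 2, 1, 0]

def gpa(profile):
    """Grade point average of a quality profile given as percentages."""
    return sum(star * pct for star, pct in zip(STARS, profile)) / 100.0

# Overall profile: weighted combination of the three sub-profiles.
overall = [sum(WEIGHTS[k] * profiles[k][i] for k in WEIGHTS) for i in range(5)]

# Notional QR funding weight: 4* counts 9 times as much as 3*, nothing below 3*.
funding_weight = 9 * overall[0] + 1 * overall[1]

print("Overall profile (4*, 3*, 2*, 1*, u):", [round(x, 1) for x in overall])
print(f"Overall GPA: {gpa(overall):.2f}")
print(f"Notional funding weight (9:1 on 4*:3*): {funding_weight:.1f}")
```

With the 9:1 weighting, the notional funding is dominated almost entirely by the 4* fraction of the overall profile.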

We knew that this REF would be difficult for the School and our fears were borne out for both the Department of Mathematics and the Department of Physics and Astronomy, because both departments grew considerably (by about 50%) during the course of 2013, largely in response to increased student numbers. New staff can bring outputs from elsewhere, but not impact. The research underpinning the impact has to have been done by staff working in the institution in question. And therein lies the rub for Sussex…

To take the Department of Physics and Astronomy as an example, last year we increased staff numbers from about 23 to about 38. But the 15 new staff members could not bring any impact with them. Lacking sufficient impact cases to submit more, we were obliged to restrict our submission to fewer than 25 staff. To make matters worse our impact cases were not graded very highly, with only 13.3% of the submission graded 4* and 13.4% graded 3*.

The outputs from Physics & Astronomy at Sussex were very good, with 93% graded 3* or 4*. That’s a higher fraction than Oxford, Cambridge, Imperial College and UCL in fact, and with a Grade Point Average of 3.10. Most other departments also submitted very good outputs – not surprisingly because the UK is actually pretty good at Physics – so the output scores are very highly bunched and a small difference in GPA means a large number of places in the rankings. The impact scores, however, have a much wider dispersion, with the result that despite the relatively small percentage contribution they have a large effect on overall rankings. As a consequence, overall, Sussex Physics & Astronomy slipped down from 14th in the RAE to 34th place in the REF (based on a Grade Point Average). Disappointing to say the least, but we’re not the only fallers. In the 2008 RAE the top-rated physics department was Lancaster; this time round they are 27th.

I now find myself in a situation eerily reminiscent of the one I faced in Cardiff after the 2008 Research Assessment Exercise, the forerunner of the REF. Having been through that experience I’m hardened to disappointments, and at least I can take heart from Cardiff’s performance this time round. Spirits were very low there after the RAE, but a thorough post-mortem, astute investment in new research areas, and determined preparations for this REF have paid dividends: they have climbed to 6th place this time round. That gives me the chance not only to congratulate my former colleagues there on their excellent result but also to use them as an example of what we at Sussex have to do for next time. An even more remarkable success story is Strathclyde, 34th in the last RAE and now top of the REF table. Congratulations to them too!

Fortunately our strategy is already in hand. The new staff have already started working towards the next REF (widely thought to be likely to happen in 2020) and we are about to start a brand new research activity in experimental physics next year. We will be in a much better position to generate research impact as we diversify our portfolio so that it is not as strongly dominated by “blue skies” research, such as particle physics and astronomy, for which it is much harder to demonstrate economic impact.

I was fully aware of the challenges facing Physics & Astronomy at Sussex when I moved here in February 2013, but with the REF submission made later the same year there was little I could do to alter the situation. Fortunately the University of Sussex management realises that we have to play a long game in Physics and has been very supportive of our continued strategic growth. The 2014 REF result is a setback, but it does demonstrate that the strategy we have already embarked upon is the right one.

Roll on 2020!

Anthem for Doomed Academics

Posted in Poetry, Science Politics with tags , , on December 17, 2014 by telescoper

Well, not long now until the results of the 2014 Research Excellence Framework are made public. I’ll post something in the way of a personal reflection tomorrow, as long as I haven’t thrown myself off Brighton Pier by then. In the meantime, I couldn’t resist sharing this brilliant parody of Wilfred Owen that I found via Twitter…

Stumbling with Confidence

(This has been written as the momentous results of the Research Excellence Framework, known to all and sundry as the dreaded REF, are about to be announced, and as careers hang in the balance depending on who are the winners and losers.)

Anthem for Doomed Academics

(with apologies to Wilfred Owen)

What lasting hell for these who try as authors?
Only the monstrous anger of the dons.
Only the stuttering academic’s crippled cursor
Can patter out career horizons.
No metrics now for them; no citations nor reviews;
Nor any voice of warning save the choirs, –
The shrill, demented choirs of wailing peers;
And lost opportunities calling them from sad HEIs.
What meetings may be held to speed them all?
Not in the hand of managers but in their eyes
Shall shine the unholy glimmers of goodbyes.
The cost of student fees shall be their pall;
Their inheritance the frustrations…

View original post 11 more words