Archive for research

Commercially-driven research should be funded by loans, not grants

Posted in Politics, Science Politics on October 27, 2015 by telescoper

I couldn’t resist a very quick comment on an item in yesterday’s Financial Times. The article may be behind a paywall, so here’s a short extract giving the essential point:

Ministers are considering proposals to replace research grants to industry with loans, in a move that business leaders fear would damage Britain’s ability to innovate.

The reason for mentioning this is that I suggested the very same idea on this blog about five years ago. My general point then was the logical inconsistency of swapping grants for loans for university students, on the grounds that they are the beneficiaries of their education and should be able to repay the investment through earnings, while not applying the same argument to businesses that profit from university-based research. I wonder if BIS have been reading this blog again?

For what it’s worth I’ll repeat here my personal view that “commercially useful” research should not be funded by the taxpayer through research grants. If it’s going to pay off in the short term it should be funded by private investors or venture capitalists of some sort. Dragon’s Den, even. When the public purse is so heavily constrained, it should only be asked to fund those things that can’t in practice be funded any other way. That means long-term, speculative, curiosity-driven scientific research.

This is pretty much the opposite of what the Treasury seems to have been thinking for the last five years. It wants to concentrate public funds in projects that can demonstrate immediate commercial potential. Taxpayers’ money used in this way ends up in the pockets of entrepreneurs if the research succeeds and, if it doesn’t, the grant has effectively been wasted. My proposal, therefore, is to phase out research grants for groups (either in universities or in business) that want to concentrate on commercially motivated research and replace them with research loans. If the claims they make to secure the advance are justified, they should have no problem repaying it from the profits they make from patent income or other forms of exploitation. If not, then they will have to pay back the loan from their own funds (as well as being exposed as bullshit merchants). In the current economic situation the loans could be made at very low interest rates and still save a huge amount of the current research budget for higher education. Indeed, after a few years (I suggest the loans should be repayable over 3-5 years) the scheme would become self-financing. I think a large fraction of research in the Applied Sciences and Engineering should be funded in this way.
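To see why such a scheme could become self-financing within a few years, here is a minimal cash-flow sketch of a revolving loan fund. All the numbers are hypothetical and serve only to illustrate the idea, not to cost it:

```python
# Toy cash-flow model of a revolving research-loan fund.
# All numbers are hypothetical; the only point is that once the first
# cohorts of loans start being repaid, repayments cover new lending.

ANNUAL_LENDING = 100.0   # amount lent out each year (arbitrary units)
TERM_YEARS = 4           # each loan repaid in equal annual instalments over 4 years
INTEREST = 0.01          # nominal (low) interest rate

def simulate(years):
    instalment = ANNUAL_LENDING * (1 + INTEREST) / TERM_YEARS
    cohorts = []  # remaining instalments owed by each past year's borrowers
    for year in range(1, years + 1):
        repayments = instalment * len(cohorts)
        net_cost = ANNUAL_LENDING - repayments
        print(f"Year {year}: repayments {repayments:6.1f}, net cost {net_cost:6.1f}")
        cohorts = [n - 1 for n in cohorts if n > 1]  # age the existing loans
        cohorts.append(TERM_YEARS)                   # add this year's new loans

simulate(8)
# By year 5 repayments exceed new lending, so the fund needs no further public money.
```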

The money saved by replacing grants to commercially driven research groups with loans could be re-invested in those areas where public investment is really needed, such as pure science and medicine. Here grants are needed because the motivation for the research is different. Much of it does, in fact, lead to commercial spin-offs, but that is accidental and likely to appear only in the very long term. The real motivation of doing this kind of research is to enrich the knowledge base of the UK and the world in general. In other words, it’s for the public good. Remember that?

Most of you probably think that this is a crazy idea, but if you do I’ll ask you now to think about how the government funds teaching in universities and ask yourself why research is handled so differently.

Research Funding – A Modest Proposal

Posted in Education, Science Politics on September 9, 2015 by telescoper

This morning, the Minister for Universities, Jo Johnson, made a speech in which, among other things, he called for research funding to be made simpler. Under the current “dual funding” system, university researchers receive money through two main routes: one is the Research Excellence Framework (REF), which leads to so-called “QR” funding allocations made via the Higher Education Funding Council for England (HEFCE); the other is research grants, which have to be applied for competitively from various sources, including the seven Research Councils.

Part of the argument for simplifying this system is the enormous expense and administrative burden of the Research Excellence Framework. Many people have commented to me that although they hate the REF and accept that it’s ridiculously expensive and time-consuming, they don’t see any alternative. I’ve been thinking about it and thought I’d make a suggestion. Feel free to shoot it down in flames through the comments box at the end, but I’ll begin with a short introduction.

Those of you old enough to remember will know that before 1992 (when the old ‘polytechnics’ were given the go-ahead to call themselves ‘universities’) the Universities Funding Council – the forerunner of HEFCE – allocated research funding to universities by a simple formula related to the number of undergraduate students. When the number of universities suddenly increased this was no longer sustainable, so the funding agency began a series of Research Assessment Exercises to assign research funds (now called QR funding) based on the outcome. This prevented research money going to departments that weren’t active in research, most (but not all) of which were in the ex-Polytechnics. Over the years the apparatus of research assessment has become larger, more burdensome, and ever more obsessed with the short-term impact of research. Like most bureaucracies it has lost sight of its original purpose and has now become something that exists purely for its own sake.

It is especially indefensible, at this time of deep cuts to universities’ core funding, that we are being forced to waste an increasingly large fraction of our shrinking budgets on staff time that accomplishes nothing useful except pandering to the bean counters.

My proposal is to abandon the latest manifestation of research assessment mania, i.e. the REF, and return to a simple formula, much like the pre-1992 system, except that QR funding should be based on research student (i.e. PhD student) numbers rather than undergraduate numbers. There’s an obvious risk of game-playing, and this idea would only stand a chance of working if the formula involved the number of successfully completed research degrees over a given period.

I can also see an argument that four-year undergraduate students (e.g. MPhys or MSci students) should be included in the formula too, as most of their degrees involve a project that requires a strong research environment.
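As a rough illustration of how such a formula-based allocation might work, here is a minimal sketch. The departments, weights and numbers are entirely hypothetical:

```python
# Sketch of a formula-based QR allocation (all weights and numbers hypothetical).
# Funding is shared in proportion to successfully completed research degrees
# over a fixed period, with a smaller optional weight for completed
# integrated-masters (MPhys/MSci) projects.

def qr_allocation(total_pot, departments, masters_weight=0.25):
    """departments maps a name to (completed PhDs, completed MPhys/MSci degrees)."""
    scores = {name: phds + masters_weight * masters
              for name, (phds, masters) in departments.items()}
    total = sum(scores.values())
    return {name: total_pot * score / total for name, score in scores.items()}

example = {
    "Dept A": (30, 80),  # research-intensive department
    "Dept B": (10, 20),
    "Dept C": (0, 0),    # research-inactive: automatically receives nothing
}
print(qr_allocation(1_000_000, example))
```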

Among the advantages of this scheme are that it’s simple, easy to administer, would not send QR funding to departments that aren’t research-active, and would not waste hundreds of millions of pounds on bureaucracy that would be better spent actually doing research. It would also maintain the current “dual support” system for research, if that’s considered a benefit.

I’m sure you’ll point out disadvantages through the comments box!

The Renewed Threat to STEM

Posted in Education, Finance, Science Politics on July 26, 2015 by telescoper

A couple of years ago, soon after taking over as Head of the School of Mathematical and Physical Sciences (MPS) at the University of Sussex, I wrote a blog post called The Threat to STEM from HEFCE’s Funding Policies about how the funding policies of the Higher Education Funding Council for England (HEFCE) were extremely biased against STEM disciplines. The main complaint I raised then was that the income per student for science subjects does not adequately reflect the huge expense of teaching these subjects compared to disciplines in the arts and humanities. The point is that universities now charge the same tuition fee for all subjects (usually £9K per annum) while the cost varies hugely across disciplines: science disciplines can cost as much as £16K per annum per student whereas arts subjects can cost as little as £6K. HEFCE makes a small gesture towards addressing this imbalance by providing an additional grant for “high cost” subjects, but that is only just over £1K per annum per student, not enough to make such courses financially viable on their own. And even that paltry contribution has been steadily dwindling.  In effect, fees paid by arts students are heavily subsidising the sciences across the Higher Education sector.
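To make the cross-subsidy explicit, here is a rough back-of-the-envelope sketch using the round figures quoted above; the numbers are illustrative only, not actual costs or allocations:

```python
# Back-of-the-envelope per-student arithmetic using the round figures above.
# Illustrative only; actual costs and HEFCE allocations vary by subject and year.

FEE = 9_000                    # tuition fee per student per year (£)
HIGH_COST_SUPPLEMENT = 1_000   # approximate HEFCE top-up for "high cost" subjects (£)

science_cost = 16_000          # typical teaching cost, lab-based science (£/student/year)
arts_cost = 6_000              # typical teaching cost, arts/humanities (£/student/year)

science_balance = FEE + HIGH_COST_SUPPLEMENT - science_cost   # about -£6,000
arts_balance = FEE - arts_cost                                # about +£3,000

print(f"Science student: {science_balance:+,} per year")
print(f"Arts student:    {arts_balance:+,} per year")
```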

The situation was bad enough before last week’s announcement of an immediate £150M cut to HEFCE’s budget. Once again the axe has fallen hardest on STEM disciplines. Worst of all, a large part of the savings will be made retrospectively, i.e. by clawing back money that had already been allocated and on which institutions had relied in planning their budgets. To be fair, HEFCE had warned institutions that cuts were coming in 2015/16:

This means that any subsequent changes to the funding available to us from Government for 2015-16, or that we have assumed for 2016-17, are likely to affect the funding we are able to distribute to institutions in the 2015-16 academic year. This may include revising allocations after they have already been announced. Accordingly, institutions should plan their budgets prudently.

However, this warning did not mention the possibility of cuts to the current year (i.e. 2014-15). No amount of prudent budget planning will help when funding is taken away retrospectively, as is now the case. I should perhaps explain that funding allocations are made by HEFCE in a lagged fashion, based on actual student numbers, so that income for the academic year 2014-15 is received by institutions during 2015-16. In fact my institution, in common with most others, operates a financial year that runs from August 1st to July 31st and I’ve just been through a lengthy process of setting the budget from August 1st 2015 onward; budgets are what I do most of the time these days, if I’m honest. I thought I had finished that job for the time being, but look:

In October 2015, we will notify institutions of changes to the adjusted 2014-15 teaching grants we announced in March 2015. These revised grant tables will incorporate the pro rata reduction of 2.4 per cent. This reduction, and any other changes for individual institutions to 2014-15 grant, will be implemented through our grant payments from November 2015. We do not intend to reissue 2014-15 grant tables to institutions before October 2015, but institutions will need to reflect any changes relating to 2014-15 in their accounts for that year (i.e. the current academic year). Any cash repayments due will be confirmed as part of the October announcements.

On top of this, any extra students recruited as a result of the government scrapping student number controls won’t attract any support at all from HEFCE, so we will only get the tuition fee. And the government says it wants the number of STEM students to increase? Someone tell me how that makes sense.

What a mess! It’s going to be back to the drawing board for me and my budget. And if a 2.4 per cent cut doesn’t sound like much to you then you need to understand it in terms of how university budgets work. It is my job – as the budget holder for MPS – to ensure that the funding that comes into my School is spent as efficiently and effectively as possible on what the School is meant to do, i.e. teaching and research. To that end I have to match income and expenditure as closely as possible. It is emphatically not the job of the School to make a profit: the target I am given is to return a small surplus (actually 4 per cent of our turnover) to contribute to longer-term investments. I’ve set a budget that does this, but now I’ll have to wait until October to find out how much I have to find in savings to absorb the grant cut. It’s exasperating when people keep moving the goalposts like this. One would almost think the government doesn’t care about the consequences of its decisions, as long as it satisfies its fixation with cuts.

And it’s not only teaching that is going to suffer. Another big slice of savings (£52M) is coming from scrapping the so-called “transitional relief” for STEM departments that lost out as a result of the last Research Excellence Framework. This again is a policy that singles out STEM disciplines for cuts. You can find the previous allocations of transitional relief in an Excel spreadsheet here. The cash cuts are largest in large universities with big activities in STEM disciplines – e.g. Imperial College will lose £10.9M previously allocated, UCL about £4.3M, and Cambridge about £4M. These are quite wealthy institutions of course, and they will no doubt cope, but that doesn’t make it any more acceptable for HEFCE to break a promise.

This particular cut won’t, in fact, alter my School’s budget. Although we were disappointed with the REF outcome in terms of league-table position, we actually increased our QR income. As an institution the University of Sussex attracted only £237,174 in transitional relief, so this cut is small potatoes for us, but that doesn’t make the clawback any more palatable from the point of view of the general health of STEM disciplines in the United Kingdom.

These cuts are also directly contrary to the claim that the UK research budget is “ring-fenced”. It clearly isn’t, and with a Comprehensive Spending Review coming up many of us are nervous that these cuts are just a foretaste of much worse things to come. Research Councils are being asked to come up with plans based on a 40% cut in cash.

Be afraid. Be very afraid.

A scientific paper with 5000 authors is absurd, but does science need “papers” at all?

Posted in History, Open Access, Science Politics, The Universe and Stuff on May 17, 2015 by telescoper

Nature News has reported on what appears to be the paper with the longest author list on record. This article has so many authors – 5,154 altogether – that 24 pages (out of a total of 33 in the paper) are devoted just to listing them, and only 9 to the actual science. Not surprisingly, the field concerned is experimental particle physics and the paper emanates from the Large Hadron Collider; it involves combining data from the CMS and ATLAS detectors to estimate the mass of the Higgs Boson. In my own fields of astronomy and cosmology, large consortia such as the Planck collaboration are becoming the rule rather than the exception for observational work. Large collaborations have achieved great things not only in physics and astronomy but also in other fields. A paper in genomics with over a thousand authors has recently been published, and the trend for ever-increasing size of collaboration seems set to continue.

I’ve got nothing at all against large collaborative projects. Quite the opposite, in fact. They’re enormously valuable not only because frontier research can often only be done that way, but also because of the wider message they send out about the benefits of international cooperation.

Having said that, one thing these large collaborations do is expose the absurdity of the current system of scientific publishing. The existence of a paper with 5000 authors is a reductio ad absurdum proof that the system is broken. Papers simply do not have 5000 “authors”. In fact, I would bet that no more than a handful of the “authors” listed on the record-breaking paper have even read the article, never mind written any of it. Despite this, scientists continue to insist that contributions to scientific research can only be measured by co-authorship of a paper. The LHC collaboration that kicked off this piece includes all kinds of scientists: technicians, engineers, physicists and programmers, at all kinds of levels, from PhD students to full Professors. Why should we insist that this huge range of contributions can only be recognized by shoe-horning the individuals concerned into the author list? The idea of a 100-author paper is palpably absurd, never mind one with fifty times that number.

So how can we assign credit to individuals who belong to large teams of researchers working in collaboration?

For the time being let us assume that we are stuck with authorship as the means of indicating a contribution to a project. Significant issues then arise about how to apportion credit in bibliometric analyses, e.g. through citations. Here are two of the difficulties: (i) if paper A is cited 100 times and has 100 authors, should each author get the same credit? and (ii) if paper B is also cited 100 times but has only one author, should that author get the same credit as each of the authors of paper A?

An interesting suggestion over on the e-astronomer a while ago addressed the first question by proposing that authors be assigned weights depending on their position in the author list. If there are N authors, the lead author gets weight N, the next N-1, and so on down to the last author, who gets weight 1. If there are 4 authors, the lead gets 4 times as much weight as the last one.

This proposal has some merit but it does not take account of the possibility that the author list is merely alphabetical, which was actually the case in all the Planck publications, for example. Still, it’s less draconian than another suggestion I have heard, which is that the first author gets all the credit and the rest get nothing. At the other extreme there’s the suggestion of using normalized citations, i.e. just dividing the citations equally among the authors and giving them a fraction 1/N each. I think I prefer this last one, in fact, as it seems more democratic and also more rational. I don’t have many publications with large numbers of authors so it doesn’t make much difference to me which measure you happen to pick. I come out as mediocre on all of them.
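To make the comparison concrete, here is a small sketch of the three schemes discussed above (position-weighted credit, first-author-only credit, and normalised 1/N credit) applied to a hypothetical four-author paper with 100 citations:

```python
# Three ways of sharing the credit for a paper's citations among its authors,
# as discussed above. Purely illustrative.

def positional_weights(n_authors):
    """Lead author gets weight N, next N-1, ..., last author 1 (normalised)."""
    raw = list(range(n_authors, 0, -1))
    total = sum(raw)
    return [w / total for w in raw]

def first_author_only(n_authors):
    """All the credit goes to the first author."""
    return [1.0] + [0.0] * (n_authors - 1)

def normalised(n_authors):
    """Equal share of 1/N to each author."""
    return [1.0 / n_authors] * n_authors

citations = 100
for scheme in (positional_weights, first_author_only, normalised):
    shares = scheme(4)
    print(scheme.__name__, [round(citations * s, 1) for s in shares])
# positional_weights [40.0, 30.0, 20.0, 10.0]
# first_author_only  [100.0, 0.0, 0.0, 0.0]
# normalised         [25.0, 25.0, 25.0, 25.0]
```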

No suggestion is ever going to be perfect, however, because any such scheme attempts to compress all the information about the different contributions and roles within a large collaboration into a single number, which clearly can’t be done algorithmically. For example, the way things work in astronomy is that instrument builders – essential to all observational work and all work based on analysing observations – usually get appended to the author lists even if they play no role in analysing the final data. This is one of the reasons the resulting papers have such long author lists, and why the bibliometric issues are so complex in the first place.

Having thousands of authors who didn’t write a single word of the paper seems absurd, but it’s the only way our current system can acknowledge the contributions made by instrumentalists, technical assistants and all the rest. Without doing this, what can such people have on their CV that shows the value of the work they have done?

What is really needed is a system of credits more like that used in television or film. Writer credits would be assigned quite separately from those given to the “director” of the project (who may or may not have written the final papers), as would those for the people who got the funding together and helped with the logistics (production credits). Sundry smaller but still vital technical roles could also be credited, such as special effects (i.e. simulations) or lighting (photometric calibration). There might even be a best boy. Many theoretical papers would be classified as “shorts”, so they would often be written and directed by one person and carry no technical credits.

The point I’m trying to make is that we seem to want to use citations to measure everything all at once, but often we want different things. If you want to use citations to judge the suitability of an applicant for a position as a research leader, you want someone with lots of directorial credits. If you want a good postdoc, you want someone with a proven track record of technical credits. But I don’t think it makes sense to appoint a research leader on the grounds that they reduced the data for umpteen large surveys. Imagine what would happen if you made someone director of a Hollywood blockbuster on the grounds that they had made the crew’s tea for over a hundred other films.

Another question I’d like to raise is one that has been bothering me for some time. When did it happen that everyone participating in an observational programme expected to be an author of a paper? It certainly hasn’t always been like that.

For example, go back about 90 years to one of the most famous astronomical studies of all time, Eddington‘s measurement of the bending of light by the gravitational field of the Sun. The paper that came out of this work was:

A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations made at the Total Eclipse of May 29, 1919.

Sir F.W. Dyson, F.R.S., Astronomer Royal, Prof. A.S. Eddington, F.R.S., and Mr C. Davidson.

Philosophical Transactions of the Royal Society of London, Series A., Volume 220, pp. 291-333, 1920.

This particular result didn’t involve a collaboration on the same scale as many of today’s, but it did entail two expeditions (one to Sobral, in Brazil, and another to the island of Principe, off the West African coast). Over a dozen people took part in the planning, the preparation of calibration plates, the taking of the eclipse measurements themselves, and so on. And that’s not counting all the people who helped locally in Sobral and Principe.

But notice that the final paper – one of the most important scientific papers of all time – has only 3 authors: Dyson did a great deal of background work getting the funds and organizing the show, but didn’t go on either expedition; Eddington led the Principe expedition and was central to much of the analysis; Davidson was one of the observers at Sobral. Andrew Crommelin, something of an eclipse expert who played a big part in the Sobral measurements, received no credit, and neither did Eddington’s main assistant at Principe.

I don’t know whether there was a lot of conflict behind the scenes in arriving at this authorship policy but, as far as I know, it was normal practice at the time to do things this way. It’s an interesting socio-historical question why and when it changed.

I’ve rambled off a bit so I’ll return to the point I was trying to get to, which is that in my view the real problem is not so much the question of authorship as the idea of the paper itself. It seems quite clear to me that the academic journal is an anachronism. Digital technology enables us to communicate ideas far more rapidly than in the past and allows much greater levels of interaction between researchers. I agree with Daniel Shanahan that the future for many fields will be defined not in terms of “papers”, which purport to represent “final” research outcomes, but by living documents continuously updated in response to open scrutiny by the community of researchers. I’ve long argued that the modern academic publishing industry is not facilitating but hindering the communication of research. The arXiv has already made academic journals virtually redundant in many branches of physics and astronomy; other disciplines will inevitably follow. The age of the academic journal is drawing to a close. Now to rethink the concept of “the paper”…

The real decline of UK research funding..

Posted in Science Politics on February 12, 2015 by telescoper

I saw a news item the other day about a report produced by the Royal Society, the British Academy, the Royal Academy of Engineering and the Academy of Medical Sciences calling for a big uplift in research spending. Specifically:

A target for investment in R&D and innovation of 3% of GDP for the UK as a whole – 1% from the government and 2% from industry and charities – in line with the top 10 OECD research investors. The government currently invests 0.5% of GDP; with 1.23% from the private sector.

For reference, here is the UK’s overall R&D spending as a fraction of GDP from 2000 to 2012:



Some people feel that scientific research funding has done relatively well over the past few years in an environment of deep cuts to government spending in other areas. It has been protected against a steep decline in funding by a “ring fence” which has kept spending level in cash terms. Although inflation as measured by the RPI has been relatively low in recent years, the real costs of scientific research have risen much faster than that measure. Here is a figure showing the effective level of funding since the last general election, which makes plain the danger to the UK’s research base:


As a nation we already spend far less than we should on research and development, and this figure makes it plain that we are heading in the wrong direction. It’s not just a question of government funding either. UK businesses invest far too little in developing products and services based on innovations in science and technology. Because of this historic underfunding, UK-based research has evolved into a lean and efficient machine, but even such a machine needs fuel to make it work and the fuel is clearly running out…

A whole lotta cheatin’ going on? REF stats revisited

Posted in Education, Science Politics on January 28, 2015 by telescoper


Here’s a scathing analysis of the Research Excellence Framework. I don’t agree with many of the points raised and will explain why in a subsequent post (if and when I get the time), but I’m reblogging it here in the hope that it will provoke some comments, either here or on the original post (also a WordPress site).

Originally posted on coastsofbohemia:



The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that “a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF”[1] rings true in a world in which Cardiff University can truthfully[2] claim that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise” from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 “Table of Excellence,” which is based on the GPA of the scores assigned by the REF’s “expert panels” to the three…


That Was The REF That Was..

Posted in Finance, Science Politics on December 18, 2014 by telescoper

I feel obliged to comment on the results of the 2014 Research Excellence Framework (REF) that were announced today. Actually, I knew about them yesterday but the news was under embargo until one minute past midnight by which time I was tucked up in bed.

The results for the two Units of Assessment relevant to the School of Mathematical and Physical Sciences are available online here for Mathematical Sciences and here for Physics and Astronomy.

To give some background: the overall REF score for a Department is obtained by combining three different components: outputs (the quality of research papers); impact (meaning impact beyond academia); and environment (which measures such things as grant income, numbers of PhD students and general infrastructure). These are weighted at 65%, 20% and 15% respectively.

Scores are assigned in each category on a star scale: submitted outputs (usually four per staff member) are graded 4* (world-leading), 3* (internationally excellent), 2* (internationally recognised), 1* (nationally recognised) or unclassified, while impact is graded 4* (outstanding), 3* (very considerable), 2* (considerable), 1* (recognised but modest) or unclassified. The number of impact cases to be submitted depended on the number of staff returned: two for up to 15 staff, three for between 15 and 25, and so on for larger submissions.

The REF will control the allocation of funding in a manner yet to be decided in detail, but it is generally thought that anything scoring 2* or less will attract no funding (so the phrase “internationally recognised” really means “worthless” in the REF, as does “considerable” when applied to impact). It is also thought likely that funding will be heavily weighted towards 4*, perhaps with a ratio of 9:1 between 4* and 3*.
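To make the arithmetic concrete, here is a short sketch of how an overall quality profile, its grade point average, and a notional funding weight would be computed from the component profiles. The example profiles and the 9:1 ratio below are purely illustrative, not official figures:

```python
# Sketch of how an overall REF score and a (notional) QR funding weight follow
# from quality profiles. The 9:1 ratio between 4* and 3* is the speculated
# figure mentioned above, not a confirmed policy.

WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def overall_profile(profiles):
    """Combine sub-profiles (percentages at 4*, 3*, 2*, 1*, U) using REF weights."""
    combined = [0.0] * 5
    for component, profile in profiles.items():
        w = WEIGHTS[component]
        combined = [c + w * p for c, p in zip(combined, profile)]
    return combined

def gpa(profile):
    """Grade point average: the 4* percentage counts 4, the 3* percentage 3, etc."""
    return sum(star * pct for star, pct in zip((4, 3, 2, 1, 0), profile)) / 100

def funding_weight(profile, four_to_three=9.0):
    """Notional funding weight: only 4* and 3* work attracts any money."""
    return four_to_three * profile[0] + profile[1]

example = {
    "outputs":     [30, 63, 7, 0, 0],    # hypothetical quality profiles (%)
    "impact":      [13, 14, 53, 20, 0],
    "environment": [40, 40, 20, 0, 0],
}
combined = overall_profile(example)
print("overall profile:", [round(x, 1) for x in combined])
print("GPA:", round(gpa(combined), 2))
print("funding weight:", round(funding_weight(combined), 1))
```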

We knew that this REF would be difficult for the School, and our fears were borne out for both the Department of Mathematics and the Department of Physics and Astronomy, because both departments grew considerably (by about 50%) during the course of 2013, largely in response to increased student numbers. New staff can bring outputs from elsewhere, but not impact: the research underpinning an impact case has to have been done by staff working at the institution in question. And therein lies the rub for Sussex…

To take the Department of Physics and Astronomy as an example, last year we increased staff numbers from about 23 to about 38. But the 15 new staff members could not bring any impact with them. Lacking sufficient impact cases to submit more, we were obliged to restrict our submission to fewer than 25 staff. To make matters worse, our impact cases were not graded very highly, with only 13.3% of the submission graded 4* and 13.4% graded 3*.

The outputs from Physics & Astronomy at Sussex were very good, with 93% graded 3* or 4* – a higher fraction, in fact, than Oxford, Cambridge, Imperial College and UCL – and a Grade Point Average of 3.10. Most other departments also submitted very good outputs – not surprisingly, because the UK is actually pretty good at Physics – so the output scores are very tightly bunched and a small difference in GPA translates into a large number of places in the rankings. The impact scores, however, have a much wider dispersion, with the result that, despite their relatively small percentage contribution, they have a large effect on the overall rankings. As a consequence, Sussex Physics & Astronomy slipped from 14th place in the RAE to 34th place in the REF (based on Grade Point Average). Disappointing, to say the least, but we’re not the only fallers. In the 2008 RAE the top-rated physics department was Lancaster; this time round they are 27th.

I now find myself in a situation eerily reminiscent of the one I faced in Cardiff after the 2008 Research Assessment Exercise, the forerunner of the REF. Having been through that experience I’m hardened to disappointments, and at least I can take heart from Cardiff’s performance this time round. Spirits were very low there after the RAE, but a thorough post-mortem, astute investment in new research areas, and determined preparations for this REF have paid dividends: they have climbed to 6th place this time round. That gives me the chance not only to congratulate my former colleagues there on their excellent result but also to use them as an example of what we at Sussex have to do for next time. An even more remarkable success story is Strathclyde, 34th in the last RAE and now top of the REF table. Congratulations to them too!

Fortunately our strategy is already in hand. The new staff have already started working towards the next REF (widely expected to take place in 2020) and we are about to start a brand new research activity in experimental physics next year. We will be in a much better position to generate research impact as we diversify our portfolio so that it is not so strongly dominated by “blue skies” research, such as particle physics and astronomy, for which it is much harder to demonstrate economic impact.

I was fully aware of the challenges facing Physics & Astronomy at Sussex when I moved here in February 2013, but with the REF submission made later the same year there was little I could do to alter the situation. Fortunately the University of Sussex management realises that we have to play a long game in Physics and has been very supportive of our continued strategic growth. The 2014 REF result is a setback, but it does demonstrate that the strategy we have already embarked upon is the right one.

Roll on 2020!

