Archive for research

Sussex University – the Place for Undergraduate Physics Research!

Posted in Education, The Universe and Stuff on February 27, 2014 by telescoper

One of the courses we offer in the School of Physics & Astronomy here at the University of Sussex is the integrated Masters in Physics with a Research Placement. Aimed at high-flying students with ambitions to become research physicists, this programme includes a paid research placement as a Junior Research Associate each summer vacation for the duration of the course; that means between Years 1 & 2, Years 2 & 3, and Years 3 & 4. This course has proved extremely attractive to a large number of very talented students, and it exemplifies the way the Department of Physics & Astronomy integrates world-class research with its teaching in a uniquely successful and imaginative way.

Here’s a little video made by the University that features Sophie Williamson, who is currently in her second year (and who is also in the class to which I’m currently teaching a module on Theoretical Physics):

This week we had some very good news about another of our undergraduate researchers, Talitha Bromwich, who is now in the final year of her MPhys degree, and is pictured below with her supervisor Dr Simon Peeters:

Talitha Bromwich with her JRA supervisor Dr Simon Peeters at 'Posters in Parliament' event 25 Feb 14

Talitha spent last summer working on the DEAP3600 dark-matter detector after being selected for the University’s Junior Research Associate scheme. Her project won first prize at the University’s JRA poster exhibition last October, and she was then chosen to present her findings – alongside undergraduate researchers from 22 other universities – in Westminster yesterday as part of the annual Posters in Parliament exhibition, organized under the auspices of the British Conference of Undergraduate Research (BCUR).

A judging panel – consisting of Ben Wallace, Conservative MP for Wyre and Preston North; Sean Coughlan, Education Correspondent for the BBC; Professor Julio Rivera, President of the US Council of Undergraduate Research; and Katherine Harrington of the Higher Education Academy – decided to award Talitha’s project First Prize in this extremely prestigious competition.

Congratulations to Talitha for her prizewinning project! I’m sure her outstanding success will inspire future generations of Sussex undergraduates too!

The Dark Side of the REF

Posted in Finance, Science Politics on August 8, 2013 by telescoper

There’s a disturbing story in the latest Times Higher which argues that the University of Leicester has apparently reneged on a promise that non-submission to the forthcoming (2014) Research Excellence Framework (REF) would not have negative career consequences. They have now said that, except in exceptional circumstances, non-submitted academics will either be moved to a teaching-only contract (where there is a vacancy and they can demonstrate teaching excellence), or have their performance “managed”, with the threat of sacking if they don’t meet the specified targets. I’d heard rumours of this on the grapevine (i.e. Twitter) before the Times Higher story was published. It’s very worrying to have it confirmed, as it raises all kinds of questions about what might happen in departments that turn out to have disappointing REF results.

There are (at least) two possible reasons for non-inclusion of the outputs of a researcher, and it is important to distinguish between them. One is that the researcher hasn’t enough high-quality outputs to submit. In the absence of individual extenuating circumstances, researchers are expected to submit four “outputs” (in my discipline that means “research papers”) for assessment. That’s a pretty minimal level of productivity, actually; such a number per year is a reasonable average for an active researcher in my field. A person employed on a contract that specifies their duties as Teaching and Research may therefore be under-performing if they can’t produce four papers over the period 2008-2013. I think some form of performance management may be justifiable in this case, but the primary aim should be to help the individual rather than show them the door. We all have fallow periods in research, and it’s not appropriate to rush to sack anyone who experiences a lean time. Andrew Wiles would have been considered ‘inactive’ had there been a REF in 1992, as he hadn’t published anything for years. Then he produced a proof of Fermat’s Last Theorem. Some things just take time.

A second reason for excluding a researcher from the REF is that the institution concerned may be making a tactical submission. As the Times Higher article explains:

The memo suggests that academics would be spared repercussions if, among other reasons, the number of individuals submitted is “constrained” by the volume of case studies their department intends to enter to demonstrate research impact.

Institutions must submit one case study for every 10 scholars entered.

Maria Nedeva, professor of science and innovation dynamics and policy at Manchester Business School, said the tactic of deciding how many academics to submit based on impact case study numbers was “rife”.

(Incidentally, the second paragraph is not quite right. The number of case studies required depends on the number of staff submitted as follows: for fewer than 15 staff, TWO case studies; for 15-24.99 staff, THREE case studies – and then for each additional ten members of staff entered a further case study is required.)
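For the numerically inclined, the rule I’ve just described works out as follows. This is only a rough sketch in Python of the thresholds as I understand them, not any kind of official calculator:

import math

def ref_case_studies_required(staff_fte):
    """Impact case studies required for a REF submission, using the
    thresholds stated above: fewer than 15 staff -> 2; 15-24.99 -> 3;
    then one more for each additional ten staff entered."""
    if staff_fte < 15:
        return 2
    return 3 + math.floor((staff_fte - 15) / 10)

# quick sanity checks against the stated thresholds
assert ref_case_studies_required(14.9) == 2
assert ref_case_studies_required(15) == 3
assert ref_case_studies_required(24.99) == 3
assert ref_case_studies_required(25) == 4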

The statement at the end of the quote there is in line with my experience too. The point is that the REF is not just a means of allocating relatively small amounts of so-called ‘QR’ research funding. Indeed, it remains entirely possible that no funding at all will be allocated following the 2014 exercise. The thinking then is that the number of staff submitted is largely irrelevant; all that will count is league table position.

This is by no means the only example of the dangers that lurk when you take league tables too seriously.

If a department is required to submit, say, four impact cases if all staff are included in the REF submission, but only has three viable ones, it would not be unreasonable to submit fewer staff, because their overall profile would be dragged down by a poor impact case even if the output quality of all staff is high. There will certainly be highly active researchers in UK institutions, including many who hold sizable external research grants, whose outputs are not submitted to the REF. As the article points out, it would be very wrong for managers to penalize scholars who have been excluded because of this sort of game-playing. That’s certainly not going to happen in the School of Mathematical and Physical Sciences at Sussex University. Not while I’m Head of School, anyway.

Moreover, even researchers whose “outputs” are not selected may still contribute to the “Environment” and/or “Impact” sections so they still, in a very real sense, do participate in their department’s REF submission.

My opinion? All this silliness could easily have been avoided by requiring all staff in all units of assessment to be submitted by all departments. You know, like would have happened if the system were actually designed to identify and reward research excellence. Instead, it’s yet another example of a bureaucratic machine that’s become entirely self-serving. It exists simply because it exists.  Research would be much better off without it.

Your PhD Questions Answered (?)

Posted in Education, The Universe and Stuff on March 10, 2013 by telescoper

As I mentioned last week, one of the main items on the agenda at the moment is recruitment of new PhD students. As usual, this finds me having to operate on both sides of the fence, playing a role in selecting students whilst also trying to advise applicants on how to target their applications, prepare for interview, and choose between offers (for those who manage to get a place).

In my field (astrophysics), the primary route for funding a PhD is through the Science and Technology Facilities Council (STFC), which operates a national deadline (31st March) before which candidates cannot be required to make a decision. This deadline sets the timescale for departments to decide too, as we clearly want to make sure all our first-choice applicants get their offers before the cutoff date.

The national deadline prevents students from being pressured into making decisions before they have heard back from all the institutions to which they have applied, so in that sense it’s a good idea. On the other hand, it does mean that there’s often frantic activity on deadline day as offers are accepted or declined. Reserves have to be contacted quickly when a favoured candidate withdraws to go somewhere else and not all of them may still be available. A student who has been waiting anxiously without a first-choice offer may suddenly receive a lifeline on deadline day.

Getting offers is one thing, but deciding between them is quite another. There are many things to take into account, and the criteria are by no means clear. I’m not the only person to have been thinking about this. There are personal matters, of course. Is it a nice place? Are the people friendly? Do you think you can get on with your potential supervisor? That sort of thing. But there’s also the actual research. Is the project really what you want to do? Is it likely to open up a future career in research, or just be a dead end? Is the mixture of theory and experiment (or observation) what you would like?

One of the issues that often arises when I discuss research with potential PhD students is how structured the project is. Some projects are mapped out by the supervisor in great detail, with specific things to be done in a specific order and well-defined milestones against which progress can be measured. Others, especially but not exclusively theoretical ones, are much more of the nature of “here’s an interesting idea – let’s study it and see where it leads”. Most PhDs are somewhere between these two extremes, but it’s probably true that experimental PhDs are more like the former, whereas theoretical ones are more like the latter. Mine, in theoretical astrophysics, ended up evolving quite considerably from its starting point.

I’ve always been grateful to my supervisor for allowing me the freedom to follow my own curiosity. But I think it was essential to be given an initial focus, in the form of a specific project to cut my teeth on. Getting a calculation finished, written up and published gave me the confidence to start out on my own, but I did need a lot of guidance during that initial phase. We all need to learn how to walk before we can run.

Another aspect of this is what the final thesis should look like. Should it be a monolithic work, focussed on one very specific topic, or can it be an anthology of contributions across a wider area? Again, it’s a question of balance. I think that a PhD thesis should be seen as a kind of brochure advertising the skills and knowledge of the student that produced it. Versatility is a good quality, so if you can do lots of different things then your thesis should represent that. On the other hand, you also need to demonstrate the ability to carry out a sustained and coherent piece of research. Someone who flits around knocking out lots of cutesy “ideas papers” may get a reputation for being a bit of a dabbler who is unable or unwilling to tackle problems in depth. The opposite extreme would be a person who is incapable of generating new ideas, but excellent once pointed in a specific direction. The best scientists, in my opinion, have creative imagination as well as technical skill and stamina. It’s a matter of balance, and some scientists are more balanced than others. There are some (scary) individuals who are brilliant at everything, of course, but we mere mortals have to make the most of our limited potential.

The postdoc market that lies beyond your PhD is extremely tough. To survive you need to maximize the chances of getting a job, and that means being able to demonstrate your suitability for as many of the opportunities that come up as possible. So if you want to do theory, make sure that you know at least something about observations and data analysis. Even if you prefer analytic work, don’t be too proud to use a computer occasionally. Research problems often require you to learn new things before you can tackle them. Get into the habit of doing that while you’re a student, and you’re set to continue for the rest of your career. But you have to do all this without spreading yourself too thin, so don’t shy away from the chunky calculations that keep you at your desk for days on end. It’s the hard yards that win you the match.

When it comes to choosing supervisors, my advice would be to look for one who has a reputation for supporting their students, but avoid those who want to exert excessive control. I think it’s a supervisor’s duty to ensure that a PhD student becomes as independent as possible as quickly as possible, but to be there with help and advice if things go wrong. Sadly there are some who treat PhD students simply as assistants, and give little thought to their career development.

But if all this sounds a bit scary, I’ll add just one thing. A PhD offers a unique challenge. It’s hard work, but stimulating and highly rewarding. If you find a project that appeals to you, go for it. You won’t regret it.

Emotion and the Scientific Method

Posted in Biographical, Music, The Universe and Stuff on February 10, 2013 by telescoper

There was an article in today’s Observer in which four scientists from different disciplines talk about how in various ways they all get a bit emotional about their science. The aim appears to be to correct “the mistaken view that scientists are unemotional people”. It’s quite an interesting piece to read, but I do think the “mistaken view” is very much a straw man. I think most people realize that scientists are humans rather than Vulcans and that as such they have just as many and as complex emotions as other people do. In fact it seems to me that the “mistaken view” may only be as prevalent as it is because so many people keep trying to refute it.

I think anyone who has worked in scientific research will recognize elements of the stories discussed in the Observer piece. On the positive side, cracking a challenging research problem can lead to a wonderful sense of euphoria. Even much smaller technical successes lead to a kind of inner contentment which is most agreeable. On the other hand, failure can lead to frustration and even anger. I’ve certainly shouted in rage at inanimate objects and, although I’ve never actually put my fist through a monitor, I’ve been close to it when my code wouldn’t do what it’s supposed to. There are times in that sort of state when working relationships get a bit strained too. I don’t think I’ve ever really exploded in front of a close collaborator of mine, but I have to admit that on one memorable occasion I completely lost it during a seminar…

So, yes. Scientists are people. They can be emotional. I’ve even known some who are quite frequently also tired. But there’s nothing wrong with that, in their work as well as in their private lives. In fact, I think it’s vital.

It seems to me that the most important element of scientific research is the part that we understand worst, namely the imaginative part. This encompasses all sorts of amazing things, from the creation of entirely new theories, to the clever design of an experiment, to some neat way of dealing with an unforeseen systematic error. Instances of pure creativity like this are essential to scientific progress, but we understand very little about how the human brain accomplishes them. Accordingly we also find it very difficult to teach creativity to science students.

Most science education focuses on the other, complementary, aspect of research, which is the purely rational part: working out the detailed ramifications of given theoretical ideas, performing measurements, testing and refining the theories, and so on. We call this “scientific method” (although that phrase is open to many interpretations). We concentrate on that aspect because we do at least have some sort of conception of what the scientific method is and how it works in practice. It involves the brain’s rational functions, and promotes the view of a scientist as intellectually detached, analytic, and (perhaps) emotionally cold.

But what we usually call the scientific method would be useless without the creative part. I’m by no means an expert on cognitive science, but I’d be willing to bet that there’s a strong connection between the “emotional” part of the brain’s activities and the existence of this creative spark. We’re used to that idea in the context of art, and I’m sure it’s also there in science.

That brings me to something else I’ve pondered over for a while. Regular readers of this blog will know that I post about music from time to time. I know my musical tastes aren’t everyone’s cup of tea, but bear with me for a moment. Some of the music (e.g. modern Jazz)  I like isn’t exactly easy listening – its technical complexity places a considerable burden on the listener to, well, listen. I’ve had comments on my musical offerings to the effect that it’s music of the head rather than of the heart. Well, I think music isn’t an either/or in this respect. I think the best music offers both intellectual and emotional experiences. Not always in equal degree, of course, but the head and the heart aren’t mutually exclusive. If we didn’t have both we’d have neither art nor science.

In fact we wouldn’t be human.

Pathways to Research

Posted in Education, The Universe and Stuff on August 24, 2012 by telescoper

The other day I had a slight disagreement with a colleague of mine about the best advice to give to new PhD students about how to tackle their research. Talking to a few other members of staff about it subsequently has convinced me that there isn’t really a consensus about it and it might therefore be worth a quick post to see what others think.

Basically the issue is whether a new research student should try to get into “hands-on” research as soon as he or she starts, or whether it’s better to spend most of the initial phase in preparation: reading all the literature, learning the techniques required, taking advanced theory courses, and so on. I know that there’s usually a mixture of these two approaches, and it will vary hugely from one discipline to another, and especially between theory and experiment, but the question is which one do you think should dominate early on?

My view of this is coloured by my own experience as a PhD (or rather DPhil) student twenty-five years ago. I went directly from a three-year undergraduate degree to a three-year postgraduate degree. I did a little bit of background reading over the summer before I started graduate studies, but basically went straight into trying to solve a problem my supervisor gave me when I arrived at Sussex to start my DPhil. I had to learn quite a lot of stuff as I went along in order to get on, which I did in a way that wasn’t at all systematic.

Fortunately I did manage to crack the problem I was given, with the consequence that I got a publication out quite early during my thesis period. Looking back on it, I even think that I was helped by the fact that I was too ignorant to realise how difficult more expert people thought the problem was. I didn’t know enough to be frightened. That’s the drawback with the approach of reading everything about a field before you have a go yourself…

In the case of the problem I had to solve, which was actually more to do with applied probability theory than physics, I managed to find (pretty much by guesswork) a cute mathematical trick that turned out to finesse the difficult parts of the calculation I had to do. I really don’t think I would have had the nerve to try such a trick if I had read all the difficult technical literature on the subject.

So I definitely benefited from the approach of diving headlong straight into the detail, but I’m very aware that it’s difficult to argue from the particular to the general. Clearly research students need to do some groundwork; they have to acquire a toolbox of some sort and know enough about the field to understand what’s worth doing. But what I’m saying is that sometimes you can know too much. All that literature can weigh you down so much that it actually stifles rather than nurtures your ability to do research. But then complete ignorance is no good either. How do you judge the right balance?

I’d be interested in comments on this, especially to what extent it is an issue in fields other than astrophysics.

The Meaning of Research

Posted in Uncategorized on March 8, 2012 by telescoper

An interesting email exchange yesterday evening led me to write this post in the hope of generating a bit of crowd sourcing.

The issue at hand concerns the vexed question of the etymology and original meaning of the word “research” (specifically in the context of scholarly enquiry). The point is that the Latin prefix re- usually seems to imply repetition, whereas the meaning we have for research nowadays is that something new is being sought.

My first thought was to do what I always do in such situations, which is reach for the online edition of the Oxford English Dictionary wherein I found the following:

Etymology: Apparently < re- prefix + search n., after Middle French recerche (rare), Middle French, French recherche thorough investigation (1452; a1704 with spec. reference to investigation into intellectual or academic questions; 1815 in plural denoting scholarly research or the published results of this) … Compare Italian ricerca (1470). Compare slightly later research v.1

Interestingly, my Latin dictionary gives a number of words for the verb form of research, such as “investigare”, most of which have recognisable English descendants, but there isn’t a word resembling “research”, or even “search”, so these must have been brought into French from some other source. The prefix re- was presumably added in line with the usual treatment of Latin words brought into French.

Most of the brain cells containing my knowledge of Latin died a long time ago, but I do recall from my school days that the prefix re- does not always mean “again” in that language, and alternative meanings have crept into other languages too. In particular, “re-” is sometimes used simply as an intensifier. I remember “resplendent” is derived from “resplendere” which means to shine (splendere) intensely, not to shine again. Likewise we have replete, which means extremely full, not full again.

This led me to my theory, henceforth named Theory A, that the French “recherche” and the Italian “ricerca” originally meant “to search intensely, or with particular thoroughness”, as in a scholar poring over documents (presumably including the Bible). Support for this idea can be found here, where it says:

1570s, “act of searching closely,” from M.Fr. recerche (1530s), from O.Fr. recercher “seek out, search closely,” from re-, intensive prefix, + cercher “to seek for” (see search). Meaning “scientific inquiry” is first attested 1630s…

Being a web source, one can’t attest to its reliability, and the dates quoted differ from the OED, but it shows that at least one other person in the world has the same interpretation as me! However, in the interest of balance I should also quote, for example, this dissenting opinion, which is also slightly at odds with the OED:

As per the Merriam-Webster Online Dictionary, the word research is derived from the Middle French “recherche”, which means “to go about seeking”, the term itself being derived from the Old French term “recerchier” a compound word from “re-” + “cerchier”, or “sercher”, meaning ‘search’. The earliest recorded use of the term was in 1577.

My correspondent (and regular commenter on here), Anton, suggested an alternative theory which is based on an idea that can be traced back to Plato. This reminded me of the following explanation of the purpose of scholarship by the venerable Jorge in Umberto Eco’s novel The Name of the Rose:

..the preservation of knowledge. Preservation, I say. Not search for… because there is no progress in the history of knowledge … merely a continuous and sublime recapitulation.

Plato indeed argued that true novelty and originality are impossible to achieve. In the Dialogues, Plato has Meno ask Socrates:

“How will you look for it, Socrates, when you do not know at all what it is? How will you aim to search for something you do not know at all? If you should meet with it, how will you know that this is the thing that you did not know? “

And Socrates answers:

“I know what you want to say, Meno … that a man cannot search either for what he knows or for what he does not know. He cannot search for what he knows—since he knows it, there is no need to search—nor for what he does not know, for he does not know what to look for.”

Theory B then is that research has an original meaning derived from this strange (but apparently extremely influential) Platonic idea in which “re-” really does imply repetition.

We scientists think of the scientific method as a means of justifying and validating new ideas, not a method by which new ideas can be generated, but generating new ideas is essential if science is really to advance. As one article I read puts it, “We aim for new-search not re-search. It is new-search that advances our understanding of how the world works.”

My research suggests that it’s possible that research doesn’t really mean re-search anyway but I can’t say I have any evidence that convincingly favours Theory A over Theory B. Maybe this is where the blogosphere can help?

I know I have an eclectic bunch of readers so, although it’s unlikely that an expert in 16th Century French is among my subscribers, I wonder if anyone out there can think of any decisive evidence that might resolve this etymological conundrum? If so, please let me have your contributions through the comments box.

In the meantime let’s subject this to a poll…

Science Publishing: What is to be done?

Posted in Science Politics on September 10, 2011 by telescoper

The argument about academic publishing has been bubbling away nicely in the mainstream media and elsewhere in the blogosphere; see my recent post for links to some of the discussion elsewhere.

I’m not going to pretend that there’s a consensus amongst all scientists about this, but everything I’ve read has confirmed my rather hardline view, which is that in my field, astrophysics, academic journals are both unnecessary and unhealthy. I can certainly accept that in days gone by, perhaps up to around 1990, scientific journals provided the only means of disseminating research to the wider world. With the rise of the internet, that is no longer the case. Year after year we have been told that digital technologies would make scientific publishing cheaper. That has not happened. Journal subscriptions have risen faster than inflation for over a decade. Why is this happening? The answer is that we’re being ripped off. What began by providing a useful service has now become simply a parasite and, like most parasites, it is endangering the health of its subject.

The scale of the racket is revealed in an article I came across in Research Fortnight. Before I give you the figures, let me explain that the UK Higher Education funding councils, such as HEFCE in England and HEFCW in Wales, award funding in a manner determined by the quality of research going on in each department as judged by various research assessment exercises; this funding is called QR funding. Now listen to this. It is estimated that around 10 per cent of all QR funding in the UK goes into journal subscriptions. There is little enough money in science research these days without our paying a tithe of such proportions. This has to stop.

You might ask why such an obviously unsustainable situation carries on. I think there are two answers to this. One is the rise of the machinery of research assessment, which plays into the hands of the publishing industry. For submitted work to count in the Research Assessment Exercise (or its new incarnation, the Research Excellence Framework) it must be published in a refereed journal. Scientists who want to break the mould by publishing their papers some other way will be stamped on by our lords and masters who hold the purse strings. The whole system is invidious.

The second answer is even more discomforting. It is that many scientists actually like the current system. Each paper in a “prestigious” journal is another feather in your cap, another source of pride. It doesn’t matter if nobody reads any of them; one’s published output is a measure of status. For far too many researchers, gathering esteem by publishing in academic journals has become an end in itself. The system corrupts and has become corrupted. You can find similar comments in a piece in last week’s Guardian.

So what can be done? Well, I think that physics and astronomy can show the way forward. There is already a rudimentary yet highly effective prototype in place, called the arXiv. In many fields, including astronomy, all new papers are put on the arXiv, and these can be downloaded by anyone for free. Particle physics led the way towards the World Wide Web, an invention that has revolutionised so many things. It’s no coincidence that physicists are also ahead of the game on academic publishing too.

Of course it takes money to run the arXiv and that money is at the moment paid by contributions from universities that use it extensively. You might then argue that means the arXiv is just another journal, just one where the subscription cost is less obvious.

Perhaps that’s true, but then just take a look at the figures. The total running costs of the arXiv amount to just $400,000 per annum. That’s not just for astronomy but for a whole range of other branches of physics too, and not only new papers but a back catalogue going back at least 15 years.

There are about 40 UK universities doing physics research. If UK physics had to sustain the costs of the arXiv on its own, the cost would be an average of just $10,000 per department per annum. Spread the cost around the rest of the world, especially the USA, and it would be peanuts. Even $10,000 is less than most single physics journal subscriptions; indeed, it’s not even 10 per cent of my department’s annual budget for physics journals!

Whenever I’ve mentioned the arXiv to publishers they’ve generally dismissed it, arguing that it doesn’t have a “sustainable business plan”. Maybe not. But it is not the job of scientific researchers to support pointless commercial enterprises. We do the research. We write the papers. We assess their quality. Now we can publish them ourselves. Our research is funded by the taxpayer, so it should not be used to line the pockets of third parties.

I’m not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., reddit, and it would be better than anything we get from academic publishers, at a fraction of the cost. Reddit, in case you don’t know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.
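To give an idea of how little machinery such a rating layer would actually need, here’s a toy sketch in Python. The names and structure are entirely my own invention for illustration; nothing like this exists on the arXiv at present:

from collections import defaultdict

class PaperRatings:
    """Toy up/down rating store: one vote per registered user per paper."""

    def __init__(self, registered_users):
        self.registered_users = set(registered_users)
        # votes[arxiv_id][user] is +1 or -1
        self.votes = defaultdict(dict)

    def vote(self, user, arxiv_id, up=True):
        if user not in self.registered_users:
            raise PermissionError("only registered users may vote")
        self.votes[arxiv_id][user] = 1 if up else -1

    def score(self, arxiv_id):
        return sum(self.votes[arxiv_id].values())

# example usage
ratings = PaperRatings(registered_users={"alice", "bob"})
ratings.vote("alice", "1101.0001", up=True)
ratings.vote("bob", "1101.0001", up=False)
print(ratings.score("1101.0001"))  # 0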

Apparently, Research Libraries UK, a network of libraries of the Russell Group universities and national libraries, has already warned journal publishers Wiley and Elsevier that they will not renew subscriptions at current prices. If it were up to me I wouldn’t bother with a warning…


Stellar Research?

Posted in Education, Science Politics on August 24, 2011 by telescoper

I heard today that the Chief Scientific Advisor to the Welsh Government, John Harries, has called for Welsh universities to be more “predatory” in attracting “star researchers” to Wales. At first sight I thought that sounded like good news for astronomy, but reading the article more closely I realise that’s not what he meant!

The point is that, according to the BBC article, Welsh universities currently attract only about 3% of the UK’s research funding, whereas the famous Barnett formula allocates Wales about 5% of the total in other areas of expenditure. Nobody involved in research would argue for funds to be allocated on any basis other than quality, so there’s no clamour for having research funding allocated formulaically à la Barnett; the only way to improve the success rate is to improve the quality of applications. John Harries suggests that means poaching groups from elsewhere who’ve already got a big portfolio of research grants…

The problem with that strategy is that it’s not very easy to persuade such people to leave their current institutions, especially if they’ve already spent years acquiring the funding needed to equip their laboratories. It’s not just a question of moving people, which is relatively easy, but can involve trying to replace lots of expensive and delicate equipment. The financial inducements needed to fund the relocation of a major research group and fight off counter-offers from its present host are likely to be so expensive that the benefit gained takes years to accrue, even if they are successful.

I agree with Prof. Harries that Welsh universities need to raise their game in research, but I don’t think this “transfer market” approach is likely to provide a solution on its own. I think Wales needs a radical restructuring of research, especially in science, across the whole sector, which I think is unacceptably complacent about the challenges ahead.

For a start, much more needs to be done to identify and nurture younger researchers, i.e. future research stars rather than present ones. Most football clubs nowadays have an “academy” dedicated to the development of promising youngsters, so why can’t we do a similar thing for research? Research groups in different Welsh universities also need to develop closer collaborations, and perhaps even full mergers, in order to compete with larger English institutions.

More controversially, I’d say that the problem is not being helped by Welsh universities continuing to be burdened by the monstrous bureaucracy and bizarre practices of the Research Excellence Framework, which allocates “QR” research funds according to priorities set by HEFCE in a way that reflects the thinking of the Westminster parliament. The distribution of QR funding in Wales, which is meant to supplement competitive grant income from UK funding bodies, should be decided by HEFCW in line with Welsh strategic priorities. Wales would be far better off withdrawing from the REF and doing its own thing under the auspices of the Welsh Assembly Government.

What I’m saying is that I’ve got nothing against Welsh universities trying to entice prominent research leaders here; we’ve recently tried (unsuccessfully) to do it here in the School of Physics & Astronomy at Cardiff University, in fact. But in the current funding climate it’s not easy to persuade their current institutions to let them go. In any case, I don’t think parachuting in a few high-profile individuals will in itself solve the deep-rooted problems of the Welsh university system. A longer term strategy needs to be found.

Scotland already punches above its weight in terms of research income for its universities and there’s no reason why, in the long run, Wales can’t do likewise.

(Guest Post) Physics and Binary Creep

Posted in Education, Finance, Science Politics on April 15, 2011 by telescoper

His Excel-lence (geddit?) Paul Crowther has been at it again, using his favourite package’s sophisticated graph-plotting facilities to produce the interesting figures that go with another guest post…

–0–

Last week’s Times Higher Ed included a news item headlined ‘binary creep’, in which HEFCE were considering restricting support for PhD research students to universities of the highest research quality. Concerns were expressed in the article about a two-stream future for universities – research intensives in the fast lane and ‘the rest’ in the slow lane. This reminded me of a recent Times Higher Ed interview with the former Commons’ Science and Technology Committee chairman, Lord (Phil) Willis. Lord Willis argued that the UK could probably sustain “no more than 30” universities with the capacity to attract the best global researchers and carry out world-class research, a view no doubt shared by ministers and civil servants within BIS. I should qualify the following line of thought by emphasising that this is not Government policy, although both stories reflect moves by funding agencies to further concentrate increasingly scarce resources on the highest ranked research universities. For example, in England HEFCE is expected to withdraw all quality-related (QR) support from 2* RAE research from 2012 onwards.

Mindful of the fact that in such a vision for the future there would be comparatively few research-intensive universities (‘winners’), where would that leave the remainder (‘losers’), especially for physics? Research quality can be quantified in all manner of ways, but for simplicity I have adopted the Quality Index (QI) from Research Fortnight, which provides a single mark out of 100 based on RAE quality profiles (4*:3*:2*:1* weighted 8:4:2:1). The chart below shows the QI-ranked list of more-or-less all 120 UK universities that were rated in RAE 2008. It will come as no surprise to anyone that Oxbridge, LSE and Imperial top the rankings, closely followed by UCL and a few other high flyers, but beyond the top 10, perhaps more surprisingly, there are no natural breaks in quality from Durham and QMUL in joint 11th place, to Bolton at 107th.
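In case anyone wants to reproduce the QI numbers, the calculation is nothing more elaborate than the following sketch, on the assumption that the 8:4:2:1 weights are normalised so that an all-4* profile scores 100:

def quality_index(p4, p3, p2, p1):
    """Research Fortnight-style Quality Index from an RAE quality profile.
    p4..p1 are the percentages of activity rated 4*, 3*, 2* and 1*
    (unclassified activity carries no weight).  The 8:4:2:1 weights are
    normalised so that an all-4* profile scores 100."""
    return (8 * p4 + 4 * p3 + 2 * p2 + 1 * p1) / 8.0

# e.g. a profile of 20% 4*, 45% 3*, 30% 2*, 5% 1*
print(round(quality_index(20, 45, 30, 5), 1))  # 50.6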

Thinking out loud about Willis’ assertion that the UK should not be spreading the jam more thinly than, say, the leading 30 universities, there would obviously be individual physics departments currently outside the top 30 which are ranked significantly higher than those within the top 30. To illustrate this, the chart also includes (in blue) physics QI scores for all teaching institutions that were assessed under UOA 19 in RAE 2008. To blindly follow Lord Willis’ suggestion, 16 out of 42 institutions involved with physics research – comprising 37 per cent of all academic staff – would be clear losers. These would include one physics department ranked within the top 10 (scoring 49) because its host institution is ranked 34th overall, while winners would include a department scoring 31, i.e. ranked 40th (out of 42) for physics, as a result of its university squeezing into the top 30. Chemistry – within the same RAE sub-panel as physics – reveals a broadly similar distribution, although there is perhaps a greater concentration of the highest research quality in the overall top 20, as the chart below illustrates.

Alternatively, if there is to be further concentration, one could argue that research funding should focus on, say, the top 20 physics departments regardless of the performance of their host institution. Indeed, already 80 percent of STFC spending goes to only 16 universities. Still, as RAE grades indicate, a strength of UK physics is the breadth of high quality research, with no natural break points until beyond 30th place in the rankings, as the final chart shows. Of course, RAE scores aren’t the sole criterion being discussed, with “critical mass” the other main driver. Due in large part to the big four, 70 per cent of physics academic staff submitted for RAE 2008 are in departments that are currently ranked in the top 20. Chemistry has a similar story to tell in the chart, albeit displaying a somewhat steeper QI gradient.

What might be the long-term consequences of a divergence between a small number of “research-facing” universities and the rest? It is apparent that if the number of physics departments involved in research were reduced by a third, some high quality research groups would be lost, regardless of precisely where the cleaver ultimately fell. Let’s not forget, too, that astrophysics represents the largest sub-field of physics according to the last IOP survey, as measured in numbers of academics.

If policy makers don’t see anything fundamentally wrong with A-level physics being taught by teachers qualified, say, in biology, then they might also wonder whether physics degrees could be taught by academics lacking a physics research background. This might work for first year undergraduate courses, but thereafter isn’t more specialist knowledge needed that a research background most readily provides? How would the third of physics academics outside the top 30 universities react to the prospect of a teaching-only future? Many would surely consider jumping ship, either to one of the chosen few or overseas, further decreasing the pool of those with research experience in the remaining physics departments. This is further complicated by the expected political desire that physics departments should be appropriately distributed geographically across England, Scotland, Wales and Northern Ireland.

As a final thought experiment, the fate of physics departments facing the prospect of a teaching-only future might also be binary in nature: either (a) wither and die, decreasing the range of institutions offering degrees in physics (or physical sciences, natural sciences etc.), perversely at a time when the Government are anxious to maintain the number of students studying Science, Technology, Engineering and Mathematics (STEM) subjects, or (b) thrive – free from the distractions of chasing dwindling research grants – by adapting to offer shorter duration physics degrees, described as “cheap and cheerful” by Dr David Starkey during the discussion on student fees on last Thursday’s Newsnight. To reiterate, it is not explicit Government policy to actively reduce the number of physics departments that receive research allocations, but this seems to be the general “direction of travel” in policy-makers’ speak, so I fear a rocky path ahead…



A Modest Proposal

Posted in Education, Science Politics on March 7, 2011 by telescoper

Last week I posted a short item about the looming Kafka-esque nightmare that is the Research Excellence Framework. A few people commented to me in private that although they hate the REF and accept that it’s ridiculously expensive and time-consuming, they didn’t see any alternative. I’ve been thinking about it and thought I’d make a suggestion. Feel free to shoot it down in flames through the box at the end, but I’ll begin with a short introduction.

Those of you old enough to remember will know that before 1992 (when the old ‘polytechnics’ were given the go-ahead to call themselves ‘universities’) the University Funding Council – the forerunner of HEFCE – allocated research funding to universities by a simple formula related to the number of undergraduate students. When the number of universities suddenly increased this was no longer sustainable, so the funding agency began a series of Research Assessment Exercises to assign research funds (now called QR funding) based on the outcome. This prevented research money going to departments that weren’t active in research, most (but not all) of which were in the ex-Polys. Over the years the apparatus of research assessment has become larger, more burdensome, and incomprehensibly obsessed with “research concentration”. Like most bureaucracies, it has lost sight of its original purpose and has now become something that exists purely for its own sake.

It’s especially indefensible at this time of deep cuts to university budgets that we are being forced to waste an increasingly large fraction of our decreasing budgets on staff-time that accomplishes nothing useful except pandering to the bean counters.

My proposal is to abandon the latest manifestation of research assessment mania, i.e. the REF, and return to a simple formula, much like the pre-1992 system, except that QR funding should be based on research student rather than undergraduate numbers.

There’s an obvious risk of game-playing, and this idea would only stand a chance of working at all if the formula involved the number of successfully completed research degrees over a given period.

I can also see an argument that four-year undergraduate students (e.g. MPhys or MSci students) should also be included in the formula, as most such degrees involve a project that requires a strong research environment.
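To be concrete, the sort of formula I have in mind really is as simple as the following sketch; the weighting given to integrated Masters students is purely illustrative and would need proper thought:

def qr_allocation(total_qr_pot, completed_phds, mphys_students, mphys_weight=0.1):
    """Share a fixed QR pot between departments in proportion to completed
    research degrees, with an (illustrative) weighting for final-year
    integrated Masters project students.  Inputs are dicts keyed by department."""
    depts = set(completed_phds) | set(mphys_students)
    scores = {d: completed_phds.get(d, 0) + mphys_weight * mphys_students.get(d, 0)
              for d in depts}
    total = sum(scores.values())
    return {d: total_qr_pot * s / total for d, s in scores.items()}

# toy example: a £1M pot shared between two departments
print(qr_allocation(1_000_000,
                    completed_phds={"Dept A": 30, "Dept B": 10},
                    mphys_students={"Dept A": 50, "Dept B": 20}))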

Among the advantages of this scheme are that it’s simple, easy to administer, would not spread QR funding into non-research departments, and would not waste hundreds of millions of pounds on bureaucracy that would be better spent on research. It would also maintain the current “dual support” system for research.

I’m sure you’ll point out disadvantages through the comments box!


