An Open Letter to the Times Higher World University Rankers

Dear Rankers,

Having perused your latest set of league tables along with the published methodology, a couple of things puzzle me.

First, I note that you have made significant changes to your methodology for combining metrics this year. How, then, can you justify making statements such as

US continues to lose its grip as institutions in Europe up their game

when it appears that any changes could well be explained not by changes in performance, as gauged by the metrics you use, but by changes in the way those metrics are combined?

I assume, as intelligent and responsible people, that you did the obvious test for this effect, namely to construct a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. Your failure to publish such a set, to illustrate how seriously your readers should take statements such as that quoted above, must then simply have been an oversight. Had you deliberately withheld evidence of the unreliability of your conclusions you would have left yourselves open to an accusation of gross dishonesty, which I am sure would be unfair.

Happily, however, there is a very easy way to allay the fears of the global university community that the world rankings are being manipulated: all you need to do is publish a set of league tables using the 2014 methodology and the 2015 data. Any difference between this table and the one you published would then simply be an artefact, and the new ranking could be ignored. I’m sure you are as anxious as anyone else to prove that the changes this year are not simply artificially induced “churn”, and I look forward to seeing the results of this straightforward calculation published in the Times Higher as soon as possible.
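
To make the request completely concrete, here is a minimal sketch in Python of the calculation involved, assuming for illustration that the methodology amounts to nothing more than the set of weights used to combine the metric scores; the universities, scores and weights below are entirely invented and are not your actual figures.

# Toy example: the same 2015 metric scores combined under two different
# weighting schemes (all numbers invented for illustration).
scores_2015 = {
    "University A": {"teaching": 90, "research": 85, "citations": 40},
    "University B": {"teaching": 60, "research": 70, "citations": 95},
    "University C": {"teaching": 75, "research": 75, "citations": 75},
}

weights_2014 = {"teaching": 0.3, "research": 0.3, "citations": 0.4}   # last year's recipe
weights_2015 = {"teaching": 0.4, "research": 0.4, "citations": 0.2}   # this year's recipe

def rank(scores, weights):
    # Order universities by weighted overall score, best first.
    overall = {u: sum(weights[m] * v for m, v in metrics.items())
               for u, metrics in scores.items()}
    return sorted(overall, key=overall.get, reverse=True)

table_old_method = rank(scores_2015, weights_2014)   # 2015 data, 2014 methodology
table_new_method = rank(scores_2015, weights_2015)   # 2015 data, 2015 methodology

# Any university whose position differs between the two lists has moved because
# the recipe changed, not because its performance did. With the invented numbers
# above the ordering reverses completely even though not a single score changed.
for old_pos, uni in enumerate(table_old_method, start=1):
    print(uni, old_pos, "->", table_new_method.index(uni) + 1)

Substituting your real 2015 metric scores and the genuine 2014 and 2015 recipes for the toy numbers is all that the exercise requires.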

Second, I notice that one of the changes to your methodology is explained thus

This year we have removed the very small number of papers (649) with more than 1,000 authors from the citations indicator.

You are presumably aware that this primarily affects papers relating to experimental particle physics, which is mostly conducted through large international collaborations (chiefly, but not exclusively, based at CERN). This change at a stroke renders such fundamental scientific breakthroughs as the discovery of the Higgs boson completely worthless for the purposes of your rankings. This is a strange thing to do, because this is exactly the type of research that inspires prospective students to study physics, as well as being a direct measure in itself of the global standing of a university.

My current institution, the University of Sussex, is heavily involved in experiments at CERN. For example, Dr Iacopo Vivarelli has just been appointed coordinator of all supersymmetry searches using the ATLAS experiment on the Large Hadron Collider. This involvement demonstrates the international standing of our excellent Experimental Particle Physics group, but if evidence of supersymmetry is found at the LHC your methodology will simply ignore it. A similar fate will also befall any experiment that requires a large international collaboration: searches for dark matter, dark energy, and gravitational waves, to name but three – all exciting and inspiring scientific adventures that you regard as unworthy of any recognition at all, but which draw students in large numbers into participating departments.

Your decision to downgrade collaborative research to zero is not only strange but also extremely dangerous, for it tells university managers that participating in world-leading collaborative research will jeopardise their rankings. How can you justify such a deliberate and premeditated attack on collaborative science? Surely it is exactly the sort of thing you should be rewarding? Physics departments not participating in such research are the ones that should be downgraded!

Your answer might be that excluding “superpapers” only damages the rankings of smaller universities, because they might owe a larger fraction of their total citation count to collaborative work. Well, so what if this is true? It’s not a reason for excluding them. Perhaps small universities are better anyway, especially when they emphasize small-group teaching and provide opportunities for students to engage in learning that’s led by cutting-edge research. Or perhaps you have decided otherwise and have changed your methodology to confirm your prejudice…
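
For illustration only, the arithmetic of that effect takes a few lines of Python; the citation counts below are invented, but they show how excluding the same set of superpapers can cost a small institution ten times the share of its citations that it costs a large one.

# Invented citation counts: each institution gets the same 2,000 citations from
# "superpapers" (papers with more than 1,000 authors), but their overall totals differ.
small_total, small_from_super = 5_000, 2_000    # superpapers: 40% of the small institution's citations
large_total, large_from_super = 50_000, 2_000   # superpapers: only 4% of the large institution's

for name, total, from_super in [("small", small_total, small_from_super),
                                ("large", large_total, large_from_super)]:
    share_lost = from_super / total
    print(name, "loses", format(share_lost, ".0%"), "of its citations when superpapers are excluded")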

I look forward to seeing your answers to the above questions through the comments box or elsewhere – though you have ignored my several attempts to raise these questions via social media. I also look forward to seeing you correct your error of omission by demonstrating – by the means described above – which changes in league table position are artefacts of your design rather than reflections of any change in performance. If it turns out that the former is the case, as I think it will, at least your own journal provides you with a platform from which you can apologize to the global academic community for wasting their time.

Yours sincerely,

Telescoper

19 Responses to “An Open Letter to the Times Higher World University Rankers”

  1. “Your answer might be that excluding “superpapers” only damages the rankings of small universities. Well, so what? ”

    I’m a little lost – what is the correlation between university size and presence in large HEP experiments that means this bias exists?

    [I’m at a small northern liberal arts college and yet we don’t have large numbers of HEP experimentalists lurking in the shadows]

    • telescoper Says:

      Superpapers are likely to make up a larger proportion of the overall citation count for a small university than for a large one, so excluding them might have a disproportionate effect on the former compared to the latter.

  2. […] “Dear Rankers, Having perused your latest set of league tables along with the published methodology, a couple of things puzzle me. First, I note that you have made significant changes to your methodology for combining metrics this year. How, then can you justify making statements such as ‘US continues to lose its grip as institutions in Europe up their game’ …” (more) […]

  3. Phillip Helbig Says:

    tl;dr: Most league tables are bullshit.

  4. I’ve always found the profound anglocentrism of the rankings very odd. Is it really the case that the Americans, British and other English-speaking peoples are so much better at running universities than anyone else? Or are the rankings inherently biased towards how universities in English-speaking countries are organized?

    • telescoper Says:

      Perhaps other countries can’t see the point!

    • Phillip Helbig Says:

      To some extent, this could be due to the fact that many of the top places are held by small, expensive, private universities, which don’t exist in many countries but are very prominent in the USA. Caltech, often #1 on such lists, is very small and, some pundits will argue, not really a university, so the comparison might not even be fair. Countries with mostly or exclusively public universities might find their top institutions placed slightly lower, but the rest won’t be far behind, whereas in the USA the average university is at a much lower level than in most or even all European countries.

      In my experience, having been a student at the University of Hamburg, and having worked at the University of Manchester and the University of Groningen (all public universities), colleagues from the States tended to be from the usual suspects: Caltech, Berkeley, Princeton, MIT, Harvard, Chicago, and to a lesser extent Stanford, Cornell, and Yale. I know many people who moved between these universities and public universities in Europe, and vice versa, and all seemed roughly comparable. There were good people and excellent people, of course, but I think there was more variation within a particular university than between universities.

  5. Hi there. I think we address your points here https://www.timeshighereducation.com/policy/rankings (most notably in the “Big science” and “All you need to know” articles). Because of the changes to data collection etc there are no year-on-year institution comparisons in our analysis. Hope this helps.

    • telescoper Says:

      No it doesn’t.

      Not in the slightest.

      If there are “no year-on-year comparisons” in your analysis, why does nearly all of your PR talk about comparisons with last year?

      Dishonest.

      And if you can’t demonstrate which changes are manufactured artificially by your change in methodology, what is the point of the entire exercise?

  6. Benoit Salle Says:

    This is an excellent post full of valid and important points, thank you Telescoper. Is it true the EU is working on a ranking of rankings?
    It’s all nonsense. I wish it wasn’t such a massive business.

    • Phillip Helbig Says:

      Probably anyone who knows enough to judge the rankings properly doesn’t need them. 🙂

      A ranking of rankings doesn’t appear to make much sense, but it might be useful—if one wants to spend time on this at all—to investigate differences between various rankings which appear to be above the noise.

      Of course, in general a ranking isn’t really that interesting at all, to anyone (or at least shouldn’t be) if the difference between the various ranks is small, perhaps even below the noise.

      • Benoit Salle Says:

        You are absolutely right. However, I am a student myself and it seems a majority of people do take these rankings very seriously, and pay no attention whatsoever to the methodology.
        Of course the journalists who produce them are to blame, unable to resist the click-bait temptation.
        But I think it is fair to say that universities themselves are to blame as well: however sad it may be, it seems abundantly clear that the higher you are ranked the less attention you pay to the methodology. And when an institution of very high reputation relays a total piece of trash (such as the ranking being discussed here), it gives it not only a massive audience but also a lot of legitimacy.

      • Phillip Helbig Says:

        Indeed. It should be the other way around, of course. Isaac Asimov once quipped that the higher one’s IQ, the less one thinks that it is a meaningful measure of anything (and vice versa).

  7. Sorry – perhaps I wasn’t clear. I said we made “no year-on-year institution comparisons” (eg individual universities). Sorry about that – should’ve made that a bit clearer.

    At a country level it is interesting to see how things have changed, and we have been really clear throughout the process about the fact that there have also been some changes to how we do things. I suspect there are more legs in this discussion (and we may have to agree to disagree!) but I saw your tweets saying you wanted a response so here seemed a good place to do that.

    Hope it’s of some use. We’ve genuinely tried to let everyone know what’s new and why.

    • telescoper Says:

      That’s simply not true. Look at the quote in my blog post. How can you argue that institutions in Europe have upped their game when all that’s happened is you have changed the game?

      It’s utterly disingenuous to tweak the methodology to produce artificial changes, make a big splash about the results in the press, and then – only when pressurized – admit that you engineered the change yourselves.

    • Phillip Helbig Says:

      Even with the clarification, the objection still stands. If you change your algorithm, then when you compare one year to another (whether for individual universities or for countries) it is not clear which changes are due to changes in the algorithm and which are due to changes in quality.

      Say you determine the consumer price index by adding up the costs of various items in a virtual shopping cart. Next year, the list of items is different, and you say that prices have gone up. Surely this statement is not very useful unless one knows whether it is due to the different list of items or to an increase in prices or to some combination.

      You need at least one year where you have the same universities and different algorithms.
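
      To put numbers on the analogy, here is a toy calculation in Python (all items and prices invented): the jump in the total says nothing about inflation until the basket is held fixed.

      # Toy shopping-basket example with invented prices: the basket changes between
      # years, but the prices of the original items do not change at all.
      basket_2014 = {"bread": 1.00, "milk": 0.80, "coffee": 3.00}
      prices_2015 = {"bread": 1.00, "milk": 0.80, "coffee": 3.00, "caviar": 25.00}

      cost_2014 = sum(basket_2014.values())                                  # old basket, old prices: 4.80
      cost_2015_new_basket = sum(prices_2015.values())                       # new basket, new prices: 29.80
      cost_2015_old_basket = sum(prices_2015[item] for item in basket_2014)  # old basket, new prices: 4.80

      # The apparent rise from 4.80 to 29.80 is entirely due to the new item;
      # recomputing the same year with the old basket shows that no price changed.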

  8. Reblogged this on Disturbing the Universe and commented:
    Insightful comment on the latest THES university rankings. Their changed methodology tacitly eliminates most experimental particle physics papers for no apparent reason.

    More generally, we’re so obsessed with rankings – of universities, schools, grant proposals – that we don’t check consistency of methods or worry about uncertainties in the rankings.

  9. […] I wish to reiterate the objection I made last year to the way these tables are manipulated year on year to create an artificial “churn” […]
