
The Worthless University Rankings

Posted in Bad Statistics, Education on September 23, 2016 by telescoper

The Times Higher World University Rankings were released this week. The main table can be found here and the methodology used to concoct them here.

Here I wish to reiterate the objection I made last year to the way these tables are manipulated year on year to create an artificial “churn” that renders them unreliable and impossible to interpret in an objective way. In other words, they’re worthless. This year, editor Phil Baty has written an article entitled Standing still is not an option in which he makes a statement that “the overall rankings methodology is the same as last year”. Actually it isn’t. In the page on methodology you will find this:

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. This year, we have designed a method for reincorporating these papers. Working with Elsevier, we have developed a new fractional counting approach that ensures that all universities where academics are authors of these papers will receive at least 5 per cent of the value of the paper, and where those that provide the most contributors to the paper receive a proportionately larger contribution.

So the methodology just isn’t “the same as last year”. In fact, every year that I’ve seen these rankings there has been some change in methodology. The change above at least attempts to improve on the absurd decision taken last year to eliminate from the citation count any papers arising from large collaborations. In my view, membership of large world-wide collaborations is in itself an indicator of international research excellence, and such papers should, if anything, be given greater rather than lesser weight. But whether you agree with the motivation for the change or not is beside the point.
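For what it’s worth, the quoted description can be made concrete with a toy calculation. The sketch below is purely illustrative: the precise formula used by Elsevier and the Times Higher is not published in detail, so this is just one plausible reading of the quoted text, namely that each university’s share of a paper is proportional to its number of authors but never less than 5 per cent.

```python
# Illustrative sketch only: a guess at one plausible reading of the quoted
# description of fractional counting, not the actual Elsevier/THE formula.

def fractional_credit(author_counts):
    """Share of a paper's citation value assigned to each university.

    author_counts: dict mapping university name -> number of authors
    from that university on the paper.

    Each university receives at least 5% of the paper's value; otherwise
    its share is proportional to its number of authors.
    """
    total_authors = sum(author_counts.values())
    return {
        uni: max(0.05, n / total_authors)
        for uni, n in author_counts.items()
    }


# Example: a 1,200-author collaboration paper spread over three institutions.
shares = fractional_credit({"Uni A": 900, "Uni B": 250, "Uni C": 50})
print(shares)  # {'Uni A': 0.75, 'Uni B': ~0.208, 'Uni C': 0.05}
```

Note that under this reading the shares need not add up to 100 per cent once small contributors are bumped up to the 5 per cent floor, which is itself a reminder of how arbitrary these choices are.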

The real question is how we can be sure that any change in an institution’s league table position from year to year is caused by a change in “performance” rather than by methodological tweaks, i.e. by changes in the metrics rather than by changes in the way they are combined. Would you trust the outcome of a medical trial in which the response of two groups of patients (e.g. one given medication and the other placebo) was assessed with two different measurement techniques?

There is an obvious and easy way to test the size of this effect: construct a parallel set of league tables using this year’s input data but last year’s methodology, so that changes in methodology can be isolated from changes in the performance indicators. All the Times Higher would need to do is publish a set of league tables using, say, the 2015/16 methodology and the 2016/17 data, for comparison with those constructed using this year’s methodology on the same data. Any differences between the two tables would give a clear indication of the reliability (or otherwise) of the rankings. No scientifically literate person would accept the result of this kind of study unless the systematic effects could be shown to be under control, yet the Times Higher – along with other purveyors of similar statistical twaddle – refuses to do this.
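As a rough illustration of the kind of check being proposed, the sketch below ranks the same set of institutions under two different weighting schemes and reports how far each one moves. The indicators, weights and data are all invented stand-ins, not the Times Higher’s actual methodology or scores; the point is simply that any movement in this comparison is, by construction, due solely to the change in methodology.

```python
# Illustrative only: invented indicators, weights and data, used to show
# the principle of the check, not the Times Higher's actual methodology.

import random

indicators = ["teaching", "research", "citations", "international", "income"]

# Hypothetical weighting schemes standing in for "last year's" and
# "this year's" methodologies.
weights_old = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
               "international": 0.05, "income": 0.05}
weights_new = {"teaching": 0.30, "research": 0.30, "citations": 0.275,
               "international": 0.075, "income": 0.05}

def rank(data, weights):
    """Rank institutions by the weighted sum of their indicator scores."""
    scores = {uni: sum(weights[k] * v[k] for k in indicators)
              for uni, v in data.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {uni: pos + 1 for pos, uni in enumerate(ordered)}

# One set of input data (the same year's indicator scores), shared by both tables.
random.seed(1)
data = {f"University {i}": {k: random.uniform(0, 100) for k in indicators}
        for i in range(1, 21)}

old_table = rank(data, weights_old)
new_table = rank(data, weights_new)

# Any position that moves here does so purely because of the methodological tweak.
for uni in sorted(data, key=new_table.get):
    shift = old_table[uni] - new_table[uni]
    if shift:
        print(f"{uni}: {old_table[uni]} -> {new_table[uni]} ({shift:+d})")
```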

I challenged the Times Higher to do this last year, and they refused. You can draw your own conclusions about why.
