Archive for League Tables

The Worthless University Rankings

Posted in Bad Statistics, Education on September 23, 2016 by telescoper

The Times Higher World University Rankings were released this week. The main table can be found here and the methodology used to concoct them here.

Here I wish to reiterate the objection I made last year to the way these tables are manipulated year on year to create an artificial “churn” that renders them unreliable and impossible to interpret in an objective way. In other words, they’re worthless. This year, editor Phil Baty has written an article entitled Standing still is not an option in which he states that “the overall rankings methodology is the same as last year”. Actually it isn’t. On the page describing the methodology you will find this:

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. This year, we have designed a method for reincorporating these papers. Working with Elsevier, we have developed a new fractional counting approach that ensures that all universities where academics are authors of these papers will receive at least 5 per cent of the value of the paper, and where those that provide the most contributors to the paper receive a proportionately larger contribution.

So the methodology just isn’t “the same as last year”. In fact every year that I’ve seen these rankings there’s been some change in methodology. The change above at least attempts to improve on the absurd decision taken last year to eliminate from the citation count any papers arising from large collaborations. In my view, membership of large world-wide collaborations is in itself an indicator of international research excellence, and such papers should if anything be given greater not lesser weight. But whether you agree with the motivation for the change or not is beside the point.
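Incidentally, to make the quoted description a little more concrete, here is a minimal sketch of one way a “fractional counting with a 5 per cent floor” scheme might work. This is just my reading of the wording above, not the actual Elsevier/THE algorithm, and the institutions and author counts are invented for illustration.

```python
# Hypothetical reading of the quoted "fractional counting" scheme: each
# institution gets its authorship share of a paper, but never less than
# 5% of the paper's value. Invented example, not the real Elsevier/THE
# algorithm.
def fractional_credit(author_counts: dict[str, int], floor: float = 0.05) -> dict[str, float]:
    total_authors = sum(author_counts.values())
    return {inst: max(floor, n / total_authors) for inst, n in author_counts.items()}

# A 1,000-author collaboration paper spread over three institutions
credit = fractional_credit({"Inst A": 700, "Inst B": 290, "Inst C": 10})
print(credit)   # {'Inst A': 0.7, 'Inst B': 0.29, 'Inst C': 0.05}
```

Note that with a floor of this kind the shares need not sum to exactly one; the quotation only promises each participating institution “at least 5 per cent of the value of the paper”, with larger contributors receiving proportionately more.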

The real question is how we can be sure that any change in league table position for an institution from year to year is caused by a change in “performance” rather than by methodological tweaks, i.e. by changes in the metrics rather than by changes in the way they are combined. Would you trust the outcome of a medical trial in which the responses of two groups of patients (e.g. one given medication and the other a placebo) were assessed with two different measurement techniques?

There is an obvious and easy way to test the size of this effect: construct a parallel set of league tables using this year’s input data but last year’s methodology, which would isolate changes in methodology from changes in the performance indicators. All the Times Higher needs to do is publish a set of league tables using the 2015/16 methodology and the 2016/17 data, for comparison with those constructed using this year’s methodology on the same data; any differences between the two tables would give a clear indication of the reliability (or otherwise) of the rankings. No scientifically literate person would accept the results of a study of this kind unless the systematic effects could be shown to be under control, yet the Times Higher – along with other purveyors of similar statistical twaddle – refuses to perform this simple check.
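By way of illustration, here is a minimal sketch of the kind of check I mean, using invented indicator scores and invented weighting schemes rather than the real THE inputs: combine the same data under two sets of weights and see how much the rankings move for reasons that have nothing to do with performance.

```python
# Minimal sketch: recombine identical indicator data under two weighting
# schemes and measure how much "churn" the reweighting alone produces.
# The indicator scores and weights are invented, not the actual THE inputs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_institutions = 200
scores = rng.uniform(30, 100, size=(n_institutions, 3))  # e.g. teaching, research, citations

weights_old = np.array([0.30, 0.30, 0.40])   # "last year's" weighting (hypothetical)
weights_new = np.array([0.25, 0.30, 0.45])   # "this year's" weighting (hypothetical)

def to_ranks(composite):
    """Rank 1 = highest composite score."""
    return (-composite).argsort().argsort() + 1

rank_old = to_ranks(scores @ weights_old)
rank_new = to_ranks(scores @ weights_new)

rho, _ = spearmanr(rank_old, rank_new)
shift = np.abs(rank_old - rank_new)
print(f"Spearman correlation between the two tables: {rho:.3f}")
print(f"Institutions changing position: {(shift > 0).sum()} of {n_institutions}")
print(f"Largest movement due to reweighting alone: {shift.max()} places")
```

Every change of position in this toy example is, by construction, an artefact of the weights; running the same comparison on the real data would show how much of the published year-on-year churn is of exactly that character.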

I challenged the Times Higher to do this last year, and they refused. You can draw your own conclusions about why.

Rank Nonsense

Posted in Bad Statistics, Education, Politics on September 8, 2016 by telescoper

It’s that time of year when international league tables (also known as “World Rankings”)  appear. We’ve already had the QS World University Rankings and the Shanghai (ARWU) World University Rankings. These will soon be joined by the Times Higher World Rankings, due out on 21st September.

A lot of people who should know a lot better give these league tables far too much attention. As far as I’m concerned they are all constructed using extremely suspect methodologies whose main function is to amplify small statistical variations into something that looks significant enough to justify constructing  a narrative about it. The resulting press coverage usually better reflects a preconceived idea in a journalist’s head than any sensible reading of the tables themselves.

A particularly egregious example of this kind of nonsense can be found in this week’s Guardian. The offending article is entitled “UK universities tumble in world rankings amid Brexit concerns”. Now I make no secret of the fact that I voted “Remain” and that I do think BrExit (if it actually happens) will damage UK universities (as well as everything else in the UK). However, linking the changes in the QS rankings to BrExit is evidently ridiculous: all the data were collected before the referendum on 23rd June anyway! In my opinion there are enough good arguments against BrExit without trying to concoct daft ones.

In any case these tables do not come with any estimate of the likely statistical variation from year to year in the metrics used to construct them, which makes changes impossible to interpret. If only the compilers of these tables would put error bars on the results! Interestingly, my former employer, the University of Sussex, has held its place exactly in the QS rankings between 2015 and 2016: it was ranked 187th in the world in both years. However, the actual scores for those two years were 55.6 in 2015 and 48.4 in 2016. Moreover, Cambridge University fell from 3rd to 4th place this year but its score only changed from 98.6 to 97.2. I very much doubt that is significant at all, but it’s mentioned prominently in the subheading of the Guardian piece:

Uncertainty over research funding and immigration rules blamed for decline, as Cambridge slips out of top three for first time.

Actually, looking closer, I find that Cambridge was joint 3rd in 2015 and is 4th this year. Over-interpretation, or what?

To end with, I can’t resist mentioning that the University of Sussex is in the top 150 in the Shanghai Rankings for Natural and Mathematical Sciences this year, having not been in the top 200 last year. This stunning improvement happened while I was Head of School for Mathematical and Physical Sciences so it clearly can not be any kind of statistical fluke but is entirely attributable to excellent leadership. Thank you for your applause.

 

 

The Rising Stars of Sussex Physics

Posted in Bad Statistics, Biographical, Education on July 28, 2016 by telescoper

This is my penultimate day in the office in the School of Mathematical and Physical Sciences at the University of Sussex, and a bit of news has arrived that seems a nice way to round off my stint as Head of School.

It seems that Physics & Astronomy research at the University of Sussex has been ranked 13th in western Europe and 7th in the UK by the leading academic publisher Nature Research, and has been profiled as one of its top-25 “rising stars” worldwide.

I was tempted to describe this rise as ‘meteoric’ but in my experience meteors generally fall down rather than rise up.

Anyway, as regular readers of this blog will know, I’m generally very sceptical of the value of league tables and there’s no reason to treat this one as qualitatively any different. Here is an explanation of the (rather curious) methodology from the University of Sussex news item:

The Nature Index 2016 Rising Stars supplement identifies the countries and institutions showing the most significant growth in high-quality research publications, using the Nature Index, which tracks the research of more than 8,000 global institutions – described as “players to watch”.

The top 100 most improved institutions in the index between 2012 and 2015 are ranked by the increase in their contribution to 68 high-quality journals. From this top 100, the supplement profiles 25 rising stars – one of which is Sussex – that are already making their mark, and have the potential to shine in coming decades.

The institutions and countries examined have increased their contribution to a selection of top natural science journals — a metric known as weighted fractional count (WFC) — from 2012 to 2015.

Mainly thanks to a quadrupling of its physical sciences score, Sussex reached 351 in the Global 500 in 2015. That represents an 83.9% rise in its contribution to index papers since 2012 — the biggest jump of any UK research organisation in the top 100 most improved institutions.

It’s certainly a strange choice of metric, as it only counts publications in “high quality” journals, presumably selected by Journal Impact Factor or some other arbitrary statistical abomination, and then takes the difference in this measure between 2012 and 2015 and expresses the change as a percentage. I noticed one institution in the list has improved by over 4600%, which makes Sussex’s change of 83.9% seem rather insignificant…
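To see why a percentage change is such an odd thing to rank on, consider the toy calculation below; the weighted fractional count (WFC) values are invented and are not the real Nature Index figures.

```python
# Toy illustration: ranking on the percentage change in weighted
# fractional count (WFC) strongly rewards a tiny starting value.
# The numbers are invented, not the real Nature Index figures.
def pct_change(wfc_2012: float, wfc_2015: float) -> float:
    return 100.0 * (wfc_2015 - wfc_2012) / wfc_2012

# A sizeable department nearly doubling its contribution...
print(f"{pct_change(50.0, 91.95):.1f}%")   # 83.9% -- a Sussex-sized rise
# ...is dwarfed by an institution starting from next to nothing
print(f"{pct_change(0.5, 23.5):.1f}%")     # 4600.0%
```

The second institution has added far less absolute output than the first, yet the percentage-based metric makes its improvement look over fifty times larger.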

But at least this table provides some sort of evidence that the investment made in Physics & Astronomy over the last few years has made a significant (and positive) difference. The number of research faculty in Physics & Astronomy has increased by more than 60%  since 2012 so one would have been surprised not to have seen an increase in publication output over the same period. On the other hand, it seems likely that many of the high-impact papers published since 2012 were written by researchers who arrived well before then because Physics research is often a slow burner. The full impact of the most recent investments has probably not yet been felt. I’m therefore confident that Physics at Sussex has a very exciting future in store as its rising stars look set to rise still further! It’s nice to be going out on a high note!

 

 

An Open Letter to the Times Higher World University Rankers

Posted in Education, The Universe and Stuff on October 5, 2015 by telescoper

Dear Rankers,

Having perused your latest set of league tables along with the published methodology, I am puzzled by a couple of things.

First, I note that you have made significant changes to your methodology for combining metrics this year. How, then, can you justify making statements such as

US continues to lose its grip as institutions in Europe up their game

when it appears that any changes could well be explained not by changes in performance, as gauged by the metrics you use, but by changes in the way those metrics are combined?

I assume, as intelligent and responsible people, that you did the obvious test for this effect, i.e. constructed a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. Your failure to publish such a set, to illustrate how seriously your readers should take statements such as that quoted above, must then simply have been an oversight. Had you deliberately withheld evidence of the unreliability of your conclusions you would have left yourselves open to an accusation of gross dishonesty, which I am sure would be unfair.

Happily, however, there is a very easy way to allay the fears of the global university community that the world rankings are being manipulated: all you need to do is publish a set of league tables using the 2014 methodology and the 2015 data. Any difference between this table and the one you published would then simply be an artefact, and the new ranking could be ignored. I’m sure you are as anxious as anyone else to prove that the changes this year are not simply artificially-induced “churn”, and I look forward to seeing the results of this straightforward calculation published in the Times Higher as soon as possible.

Second, I notice that one of the changes to your methodology is explained thus

This year we have removed the very small number of papers (649) with more than 1,000 authors from the citations indicator.

You are presumably aware that this primarily affects papers relating to experimental particle physics, which is mostly conducted through large international collaborations (chiefly, but not exclusively, based at CERN). This change at a stroke renders such fundamental scientific breakthroughs as the discovery of the Higgs Boson completely worthless. This is a strange thing to do, because this is exactly the type of research that inspires prospective students to study physics, as well as being a direct measure in itself of the global standing of a University.

My current institution, the University of Sussex, is heavily involved in experiments at CERN. For example, Dr Iacopo Vivarelli has just been appointed coordinator of all supersymmetry searches using the ATLAS experiment on the Large Hadron Collider. This involvement demonstrates the international standing of our excellent Experimental Particle Physics group, but if evidence of supersymmetry is found at the LHC your methodology will simply ignore it. A similar fate will also befall any experiment that requires large international collaborations: searches for dark matter, dark energy, and gravitational waves to name but three, all exciting and inspiring scientific adventures that you regard as unworthy of any recognition at all but which draw students in large numbers into participating departments.

Your decision to downgrade collaborative research to zero is not only strange but also extremely dangerous, for it tells university managers that participating in world-leading collaborative research will jeopardise their rankings. How can you justify such a deliberate and premeditated attack on collaborative science? Surely it is exactly the sort of thing you should be rewarding? Physics departments not participating in such research are the ones that should be downgraded!

Your answer might be that excluding “superpapers” only damages the rankings of smaller universities because they might owe a larger fraction of their total citation count to collaborative work. Well, so what if this is true? It’s not a reason for excluding them. Perhaps small universities are better anyway, especially when they emphasize small-group teaching and provide opportunities for students to engage in learning that’s led by cutting-edge research. Or perhaps you have decided otherwise and have changed your methodology to confirm your prejudice…

I look forward to seeing your answers to the above questions through the comments box or elsewhere – though you have ignored my several attempts to raise these questions via social media. I also look forward to seeing you correct your error of omission by demonstrating – by the means described above – which changes in league table position are due to your design rather than to any change in performance. If it turns out that the former is the case, as I think it will, at least your own journal provides you with a platform from which you can apologize to the global academic community for wasting their time.

Yours sincerely,

Telescoper

Sussex and the World Premier League of Physics

Posted in Education, The Universe and Stuff on August 16, 2014 by telescoper

In the office again busy finishing off a few things before flying off for another conference (of which more anon).

Anyway, I thought I’d take a short break for a cup of tea and a go on the blog.

Today is the first day of the new Premiership season and, coincidentally, last week saw some good news about the Department of Physics and Astronomy at the University of Sussex in a different kind of league table.

The latest (2014) Academic Rankings of World Universities (often called the “Shanghai Rankings”) are out so, as I suspect many of my colleagues also did, I drilled down to look at the rankings of Physics departments.

Not surprisingly the top six (Berkeley, Princeton, MIT, Harvard, Caltech, & Stanford) are all based in the USA. The top British university is, also not surprisingly, Cambridge in 9th place. That’s the only UK university in the top ten for Physics. The other leading UK physics departments are: Manchester (13th), Imperial (15th), Edinburgh (20th), Durham (28th), Oxford (39th) and UCL (47th). I don’t think there will be any surprise that these all made it into the top 50 departments worldwide.

Just outside the top 50, in joint 51st place in the world, is the Department of Physics & Astronomy at the University of Sussex. For a relatively small department in a relatively small university this is a truly outstanding result. It puts the Department clearly in 8th place in the UK, ahead of Birmingham, Bristol, Leicester, Queen Mary, Nottingham, Southampton, St Andrews, Lancaster, Glasgow, Sheffield and Warwick, all of whom made the top 200 in the world.

Incidentally, two of the other departments tied in 51st place are at Nagoya University in Japan (where I visited in January) and Copenhagen University in Denmark (where I’m going next week).

Although I have deep reservations about the usefulness of league tables, I’m not at all averse to using them as an excuse for a celebration and to help raise the profile of Physics and Astronomy at Sussex generally.  I’d therefore like to take the opportunity to offer hearty congratulations to the wonderful staff of the Department of Physics & Astronomy on their achievement. 

With the recent investments we’ve had and further plans for growth I hope over the next few years we can move even further up the rankings. Unless of course the methodology changes or we’re subject to a “random” (i.e. downward) fluctuation…

 

 

 

IQ in different academic fields – Interesting? Quite!

Posted in Bad Statistics on May 26, 2013 by telescoper

You all know how much I detest league tables, especially those that are based on entirely arbitrary criteria but nevertheless promote a feeling of smug self-satisfaction for those who are lucky enough to find themselves at the top. So when my attention was drawn to a blog post that shows (or purports to show) the variation of average IQ across different academic disciplines, I decided to post the corresponding ranking with the usual health warning that IQ tests only measure a subject’s ability to do IQ tests. This isn’t even based on IQ test results per se, but on a conversion between Graduate Record Examination (GRE) results and IQ, which may be questionable. Moreover, the differences are really rather small and (as usual) no estimate of sampling uncertainty is provided.

Does this list mean that physicists are smarter than anyone else? You might say that. I couldn’t possibly comment…

  • 130.0 Physics
  • 129.0 Mathematics
  • 128.5 Computer Science
  • 128.0 Economics
  • 127.5 Chemical engineering
  • 127.0 Material science
  • 126.0 Electrical engineering
  • 125.5 Mechanical engineering
  • 125.0 Philosophy
  • 124.0 Chemistry
  • 123.0 Earth sciences
  • 122.0 Industrial engineering
  • 122.0 Civil engineering
  • 121.5 Biology
  • 120.1 English/literature
  • 120.0 Religion/theology
  • 119.8 Political science
  • 119.7 History
  • 118.0 Art history
  • 117.7 Anthropology/archeology
  • 116.5 Architecture
  • 116.0 Business
  • 115.0 Sociology
  • 114.0 Psychology
  • 114.0 Medicine
  • 112.0 Communication
  • 109.0 Education
  • 106.0 Public administration

Never mind the table, look at the sample size!

Posted in Bad Statistics on April 29, 2013 by telescoper

This morning I was just thinking that it’s been a while since I’ve filed anything in the category marked bad statistics when I glanced at today’s copy of the Times Higher and found something that’s given me an excuse to rectify my lapse. Last week saw the publication of said organ’s new Student Experience Survey which ranks  British Universities in order of the responses given by students to questions about various aspects of the teaching, social life and so  on. I had a go at this table a few years ago, but they still keep trotting it out. Here are the main results, sorted in decreasing order:

Rank University Score Respondents
1 University of East Anglia 84.8 119
2 University of Oxford 84.2 259
3 University of Sheffield 83.9 192
3 University of Cambridge 83.9 245
5 Loughborough University 82.8 102
6 University of Bath 82.7 159
7 University of Leeds 82.5 219
8 University of Dundee 82.4 103
9 York St John University 81.2 88
10 Lancaster University 81.1 100
11 University of Southampton 80.9 191
11 University of Birmingham 80.9 198
11 University of Nottingham 80.9 270
14 Cardiff University 80.8 113
14 Newcastle University 80.8 125
16 Durham University 80.3 188
17 University of Warwick 80.2 205
18 University of St Andrews 79.8 109
18 University of Glasgow 79.8 131
20 Queen’s University Belfast 79.2 101
21 University of Hull 79.1 106
22 University of Winchester 79 106
23 Northumbria University 78.9 100
23 University of Lincoln 78.9 103
23 University of Strathclyde 78.9 107
26 University of Surrey 78.8 102
26 University of Leicester 78.8 105
26 University of Exeter 78.8 130
29 University of Chester 78.7 102
30 Heriot-Watt University 78.6 101
31 Keele University 78.5 102
32 University of Kent 78.4 110
33 University of Reading 78.1 101
33 Bangor University 78.1 101
35 University of Huddersfield 78 104
36 University of Central Lancashire 77.9 121
37 Queen Mary, University of London 77.8 103
37 University of York 77.8 106
39 University of Edinburgh 77.7 170
40 University of Manchester 77.4 252
41 Imperial College London 77.3 148
42 Swansea University 77.1 103
43 Sheffield Hallam University 77 102
43 Teesside University 77 103
45 Brunel University 76.6 110
46 University of Portsmouth 76.4 107
47 University of Gloucestershire 76.3 53
47 Robert Gordon University 76.3 103
47 Aberystwyth University 76.3 104
50 University of Essex 76 103
50 University of Glamorgan 76 108
50 Plymouth University 76 112
53 University of Sunderland 75.9 100
54 Canterbury Christ Church University 75.8 102
55 De Montfort University 75.7 103
56 University of Bradford 75.5 52
56 University of Sussex 75.5 102
58 Nottingham Trent University 75.4 103
59 University of Roehampton 75.1 102
60 University of Ulster 75 101
60 Staffordshire University 75 102
62 Royal Veterinary College 74.8 50
62 Liverpool John Moores University 74.8 102
64 University of Bristol 74.7 137
65 University of Worcester 74.4 101
66 University of Derby 74.2 101
67 University College London 74.1 102
68 University of Aberdeen 73.9 105
69 University of the West of England 73.8 101
69 Coventry University 73.8 102
71 University of Hertfordshire 73.7 105
72 London School of Economics 73.5 51
73 Royal Holloway, University of London 73.4 104
74 University of Stirling 73.3 54
75 King’s College London 73.2 105
76 Bournemouth University 73.1 103
77 Southampton Solent University 72.7 102
78 Goldsmiths, University of London 72.5 52
78 Leeds Metropolitan University 72.5 106
80 Manchester Metropolitan University 72.2 104
81 University of Liverpool 72 104
82 Birmingham City University 71.8 101
83 Anglia Ruskin University 71.7 102
84 Glasgow Caledonian University 71.1 100
84 Kingston University 71.1 102
86 Aston University 71 52
86 University of Brighton 71 106
88 University of Wolverhampton 70.9 103
89 Oxford Brookes University 70.5 106
90 University of Salford 70.2 102
91 University of Cumbria 69.2 51
92 Napier University 68.8 101
93 University of Greenwich 68.5 102
94 University of Westminster 68.1 101
95 University of Bedfordshire 67.9 100
96 University of the Arts London 66 54
97 City University London 65.4 102
97 London Metropolitan University 65.4 103
97 The University of the West of Scotland 65.4 103
100 Middlesex University 65.1 104
101 University of East London 61.7 51
102 London South Bank University 61.2 50
Average scores 75.5 11459
YouthSight is the source of the data that have been used to compile the table of results for the Times Higher Education Student Experience Survey, and it retains the ownership of those data. Each higher education institution’s score has been indexed to give a percentage of the maximum score attainable. For each of the 21 attributes, students were given a seven-point scale and asked how strongly they agreed or disagreed with a number of statements based on their university experience.

My current employer, the University of Sussex, comes out right on the average (75.5)  and is consequently in the middle in this league table. However, let’s look at this in a bit more detail.  The number of students whose responses produced the score of 75.5 was just 102. That’s by no means the smallest sample in the survey, either. The University of Sussex has over 13,000 students. The score in this table is therefore obtained from less than 1% of the relevant student population. How representative can the results be, given that the sample is so incredibly small?

What is conspicuous by its absence from this table is any measure of the “margin-of-error” of the estimated score. What I mean by this is how much the sample score would change for Sussex if a different set of 102 students were involved. Unless every Sussex student scores exactly 75.5 then the score will vary from sample to sample. The smaller the sample, the larger the resulting uncertainty.

Given a survey of this type it should be quite straightforward to calculate the spread of scores from student to student within a sample from a given university in terms of the standard deviation, σ, as well as the mean score. Unfortunately, this survey does not include this information. However, let’s suppose for the sake of argument that the standard deviation for Sussex is quite small, say 10% of the mean value, i.e. 7.55. I imagine that it’s much larger than that, in fact, but this is just meant to be by way of an illustration.

If you have a sample size of  N then the standard error of the mean is going to be roughly (σ⁄√N) which, for Sussex, is about 0.75. Assuming everything has a normal distribution, this would mean that the “true” score for the full population of Sussex students has a 95% chance of being within two standard errors of the mean, i.e. between 74 and 77. This means Sussex could really be as high as 43rd place or as low as 67th, and that’s making very conservative assumptions about how much one student differs from another within each institution.
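For anyone who wants to check the arithmetic, here it is spelled out; remember that the 10% standard deviation is an assumption made for illustration, not a figure published with the survey.

```python
# Back-of-the-envelope check of the argument above. The standard
# deviation (10% of the mean) is an illustrative assumption, not a
# figure published with the survey.
import math

mean_score = 75.5           # Sussex's published score
n_respondents = 102         # sample size for Sussex
sigma = 0.10 * mean_score   # assumed student-to-student spread (7.55)

std_error = sigma / math.sqrt(n_respondents)
low, high = mean_score - 2 * std_error, mean_score + 2 * std_error

print(f"standard error of the mean: {std_error:.2f}")        # about 0.75
print(f"approximate 95% interval: {low:.1f} to {high:.1f}")   # about 74.0 to 77.0
```

Scores between 74 and 77 correspond to a spread of roughly 25 places in this table (43rd to 67th), which is why the quoted rank is far less secure than the decimal places in the scores suggest.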

That example is just for illustration, and the figures may well be wrong, but my main gripe is that I don’t understand how these guys can get away with publishing results like this without listing the margin of error at all. Perhaps it’s because that would make it obvious how unreliable the rankings are? Whatever the reason, we’d never get away with publishing results without errors in a serious scientific journal.

This sampling uncertainty almost certainly accounts for the big changes from year to year in these tables. For instance, the University of Lincoln is 23rd in this year’s table, but last year was way down in 66th place. Has something dramatic happened there to account for this meteoric rise? I doubt it. It’s more likely to be just a sampling fluctuation.

In fact I seriously doubt whether any of the scores in this table is significantly different from the mean score; the range from top to bottom is only 61 to 85 showing a considerable uniformity across all 102 institutions listed. What a statistically literate person should take from this table is that (a) it’s a complete waste of time and (b) wherever you go to University you’ll probably have a good experience!