Metrics for `Academic Reputation’

This weekend I came across a provocative paper on the arXiv with the title Measuring the academic reputation through citation records via PageRank. Here is the abstract:

The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of University Rankings have been proposed to quantify the excellence of different research institutions in the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, both at the institutional and governmental level. Yet, the debate on what rankings are exactly measuring is enduring. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Our results allow us to quantify the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias, at odds with ad hoc surveys often employed by ranking bodies in order to attain similar results. Because our findings are found to correlate extremely well with the ARWU Subject rankings, the approach we propose in our paper may open the door to new, Academic Ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of Academic Prestige with its quantitative measurements via publication impact.

(The link to the description of the PageRank algorithm was added by me; I also corrected a few spelling mistakes in the abstract). You can find the full paper here (PDF).
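For readers who haven't met PageRank outside the context of web search, here is a minimal sketch of how it applies to a citation network between institutions, using plain power iteration. The institution names, the citation counts on the edges and the damping factor of 0.85 are all invented for illustration; they are not taken from the paper, whose construction of the five Web of Science networks is considerably more involved.

```python
# Minimal sketch of PageRank on a toy inter-institution citation network.
# The institutions and edge weights below are made up for illustration only.

def pagerank(graph, damping=0.85, tol=1e-10, max_iter=1000):
    """Power-iteration PageRank. graph[u][v] = weight of citations from u to v."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    out_weight = {u: sum(graph[u].values()) for u in nodes}

    for _ in range(max_iter):
        # Rank held by "dangling" institutions (no outgoing citations) is
        # redistributed uniformly across the whole network.
        dangling = sum(rank[u] for u in nodes if out_weight[u] == 0) / n
        new = {}
        for v in nodes:
            # Rank flows into v from every institution u that cites it, in
            # proportion to the share of u's outgoing citations going to v.
            inflow = sum(
                rank[u] * graph[u].get(v, 0.0) / out_weight[u]
                for u in nodes if out_weight[u] > 0
            )
            new[v] = (1 - damping) / n + damping * (inflow + dangling)
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new
    return rank

# Toy network: edge weight = number of citations from papers at one
# institution to papers at another (numbers invented).
citations = {
    "Univ A": {"Univ B": 30, "Univ C": 10},
    "Univ B": {"Univ A": 5},
    "Univ C": {"Univ A": 20, "Univ B": 15},
}

for inst, score in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(f"{inst}: {score:.3f}")
```

The point of the algorithm is that a citation from a highly-ranked institution counts for more than one from a lowly-ranked institution, which is why the authors treat the result as a proxy for reputation rather than a simple citation tally.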

For what it’s worth, I think the paper contains some interesting ideas (e.g. treating citations as a `tree’ rather than a simple `list’) but the authors make some assumptions that I find deeply questionable (e.g. that being cited in a short reference list is somehow of higher value than in a long one). The danger is that using such information in a metric could create an incentive for further bad behaviour (such as citation cartels).

I have blogged quite a few times about the uses and abuses of citations (see tag here), and I won’t rehearse these arguments here. I will say, however, that I do agree with the idea of sharing citations among the authors of a paper rather than giving each and every author credit for the total. Many astronomers disagree with this point of view, but surely it is perverse to argue that the 100th author of a paper with 51 citations deserves more credit than the sole author of a paper with 49?
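To make that arithmetic concrete, here is a tiny sketch of fractional (shared) citation counting, in which each author is credited with a paper’s citations divided by its number of authors. The two papers are invented to mirror the example above; this is also, as far as I understand it, the idea behind a `normalized’ citation count, though the exact recipe used by any particular database is an assumption on my part.

```python
# Fractional (shared) citation credit: each author of a paper gets
# citations / number_of_authors, rather than full credit for the total.
# The two papers below are invented to mirror the example in the post.

papers = [
    {"citations": 51, "n_authors": 100},  # 100-author paper with 51 citations
    {"citations": 49, "n_authors": 1},    # sole-author paper with 49 citations
]

for p in papers:
    share = p["citations"] / p["n_authors"]
    print(f"{p['n_authors']:>3} author(s), {p['citations']} citations "
          f"-> credit per author: {share:.2f}")

# Under shared credit the 100th author gets 0.51 of a citation,
# while the sole author keeps all 49: the opposite of full counting.
```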

Above all, though, the problem with constructing a metric for `Academic Reputation’ is that the concept is so difficult to define in the first place…

9 Responses to “Metrics for `Academic Reputation’”

  1. “I will say, however, that I do agree with the idea of sharing citations among the authors of the paper rather than giving each and every author credit for the total.”

    Indeed. It is actually substantially more work to have a single-author paper with 49 citations than a 100-author paper with 51, or even with 151 or perhaps 1051.

    • Note that ADS has the “normalized citation count”, which is presumably the quotient of the “abnormal” count and the number of authors.

      Ideally, a) there would be a percentage next to each name and b) they would add up to 100.

  2. Michel C. Says:

    A higher rank raises the confidence level, which raises the number of citations, which raises the rank, which raises the… We could say the same for low ranking but with the reverse effect. People change all the time… Snobbism!

  3. crisisinphysics Says:

Why doubt that “being cited in a short reference list is somehow of higher value than in a long one”? The contrary holds – that being cited in a long list is of lower value – due to padding with post-research references. Long lists are legitimate in review papers, but research papers should concentrate on the few influential sources. How to frame a valuation scheme that embodies such a principle?

    • telescoper Says:

A short reference list seems to me more likely to be a biased selection of cartel members or friends of the author(s).

    • The entire article, on grade (mark) inflation in UK universities, is worth reading. Here is an extract about league tables, echoing complaints Peter has voiced over the years here:

      Why does management impel lecturers to grade up? Because universities need good grades to rank highly in league tables. First released in 1993, league tables are now the prime driver of where students apply. There are three key league tables, including those compiled by the Times and the Guardian. They all result in similar rankings, producing an analogous handful of extremely limited and obscurely calculated data points. What is clear is that grades are critical to a university’s ranking. They affect it both directly, as one of the main measures of performance, and indirectly, through a crucial survey of final-year students. Employment prospects, another major part of the rankings, are affected by grades too. Good honours are now so common that many graduate employers will not take students without at least a 2:1. Grade inflation begets grade inflation.

      The data in the tables is impossible to verify or calculate independently. The Guardian’s, for example, has since 2008 been entirely constructed by one man: a contractor who also works at Kingston University. These league tables are given credibility by the newspapers that publish them, without those papers having either the desire or ability to affirm their legitimacy. They are not overseen by universities or government. There is limited academic research to verify the accuracy or relevance of any of the data in them. And yet, they direct the decisions of hundreds of thousands of students year after year.

      No measure in the tables is without evident failings. No rational student would, when surveyed, traduce the university whose degree they are about to carry, no matter how poor it was; this perversity is widely recognised by recent graduates. The student-staff ratio in the tables bears little relation to publicly available data. Spend per student is a similarly nebulous measure, open to manipulation. And the employment prospects measure in this year’s Guardian table ranks Oxford behind 28 universities, including De Montfort and Bradford. The only measure with any meaning – and it is limited – is entry standards, which isn’t a measure of a university but of its reputation among recent applicants.

      League tables don’t measure what matters. They measure what was initially available and what has since been spuriously created. And they now determine not only the decisions of students, but the fate of every university.
