The Impact X-Factor

Just time for a quick (yet still rather tardy) post to direct your attention to an excellent polemical piece by Stephen Curry pointing out the pointlessness of Journal Impact Factors. For those of you in blissful ignorance about the statistical aberration that is the JIF, it’s basically a measure of the average number of citations attracted by a paper published in a given journal. The idea is that if you publish a paper in a journal with a large JIF then it’s in among a number of papers that are highly cited and therefore presumably high quality. Using a form of Proof by Association, your paper must therefore be excellent too, hanging around with tall people being a tried-and-tested way of becoming tall.

I won’t repeat all Stephen Curry’s arguments as to why this is bollocks – read the piece for yourself – but one of the most important is that the distribution of citations per paper is extremely skewed, so the average is dragged upwards by a few papers with huge numbers of citations. As a consequence most papers published in a journal with a large JIF attract many fewer citations than the average. Moreover, modern bibliometric databases make it quite easy to extract citation information for individual papers, which is what is relevant if you’re trying to judge the quality or impact of a particular piece of work, so why bother with the JIF at all?
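To see how badly a skewed distribution misbehaves, here is a toy calculation with an entirely made-up set of citation counts for ten papers in a hypothetical journal; the numbers are invented purely for illustration, but the pattern (one blockbuster, many barely-cited papers) is typical:

```python
# Hypothetical citation counts: nine modestly cited papers and one blockbuster.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]

mean = sum(citations) / len(citations)                 # the JIF-style average
median = sorted(citations)[len(citations) // 2]        # a more robust centre

below_mean = sum(1 for c in citations if c < mean)
print(f"mean = {mean:.1f}, median = {median}")         # mean = 14.1, median = 3
print(f"{below_mean} of {len(citations)} papers fall below the mean")  # 9 of 10
```

Nine out of ten papers sit well below the “average”, which is exactly the point: the journal-level mean tells you almost nothing about a typical paper in it.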

I will however copy the summary, which is to the point:

So consider all that we know of impact factors and think on this: if you use impact factors you are statistically illiterate.

  • If you include journal impact factors in the list of publications in your cv, you are statistically illiterate.
  • If you are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors, you are statistically illiterate.
  • If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
  • If you see someone else using impact factors and make no attempt at correction, you connive at statistical illiteracy.

Statistical illiteracy is by no means as rare among scientists as we’d like to think, but at least I can say that I pay no attention whatsoever to Journal Impact Factors. In fact I don’t think many people in astronomy or astrophysics use them at all. I’d be interested to hear from anyone who does.

I’d like to add a little coda to Stephen Curry’s argument. I’d say that if you publish a paper in a journal with a large JIF (e.g. Nature) but the paper turns out to attract very few citations then the paper should be penalised in a bibliometric analysis, rather like the handicap system used in horse racing or golf. If, despite the press hype and other tedious trumpetings associated with the publication of a Nature paper, the work still attracts negligible interest then it must really be a stinker and should be rated as such by grant panels, etc. Likewise if you publish a paper in a less impactful journal which nevertheless becomes a citation hit then it should be given extra kudos because it has gained recognition by quality alone.

Of course citation numbers don’t necessarily mean quality. Many excellent papers are slow burners from a bibliometric point of view. However, if a journal markets itself as being a vehicle for papers that are intended to attract large citation counts and a paper published there flops then I think it should attract a black mark. Hoist it on its own petard, as it were.

So I suggest papers be awarded an Impact X-Factor, based on the difference between a paper’s citation count and the JIF of the journal in which it appeared. For most papers this will of course be negative, which would serve their authors right for mentioning the Impact Factor in the first place.
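The arithmetic is trivial, but for the avoidance of doubt here is a minimal sketch; the function name and all the numbers below are my own invention for illustration, not an established metric:

```python
def impact_x_factor(citations: int, journal_jif: float) -> float:
    """Proposed Impact X-Factor: the paper's own citation count minus
    the JIF of the journal it appeared in. Positive means the paper
    out-performed its journal's average; negative means it flopped."""
    return citations - journal_jif

# A hypothetical paper in a high-JIF journal that attracted little interest:
print(impact_x_factor(citations=3, journal_jif=42.8))   # negative: penalised
# A hypothetical citation hit in a modest journal:
print(impact_x_factor(citations=150, journal_jif=2.1))  # positive: extra kudos
```

The sign does the work of the handicap system described above: a flop in a “high-impact” journal scores badly, while a hit in an obscure one scores well.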

PS. I chose the name “X-factor” as in the TV show precisely for its negative connotations.


18 Responses to “The Impact X-Factor”

  1. Ralph Hartley Says:

    Only the second item listed is arguably evidence of statistical illiteracy. The first and third are attempts to exploit someone *else’s* statistical illiteracy. That may be ethically questionable, but it can be effective. The journal certainly *does* benefit from its trumpeting.

  2. Citation numbers are even less meaningful than impact factors as a quality indicator. They are mainly determined by the size of the research field (you can’t get more citations than there are papers published in the field) and the number of co-authors (people are most aware of papers which they wrote or at least authored). At least the impact factor averages over such fluctuations.

    A good journal is one where the editor manages to improve the quality of submitted papers. A poor one doesn’t. Their impact factor may still be the same. Still, will you publish your next major result in an obscure journal? Or will you go for high visibility using astro-ph?

    • telescoper Says:

      What’s the impact factor of astro-ph?

    • The size-of-field issue is at least relevant for the proposed X-factor if you publish in a multi-field journal. My (nuclear physics) colleagues repeatedly dismiss the IF for Journal of Physics G because it is strongly influenced by the particle physics papers (esp the Review of Particle Physics) yet don’t manage to extend their logic to their favourite journals.

      It’d be interesting to see the citation count distribution by field for, say, Nature. I guess a little playing with Web of Knowledge could give me what I want.

  3. Ralph Hartley Says:

    The observed behavior could be perfectly rational even if the Impact Factors were assigned to journals at random.

    If authors prefer to publish in high impact journals, they can be more selective, and the best papers will be published there. If the best papers are published in high impact journals, tenure committees will give them more weight. If tenure committees give them more weight, authors will prefer to publish in them.

    Even if a paper has low impact itself, the impact factor of the journal it is published in is a proxy for the competition it faced. Not a perfect proxy, for sure, but there are no perfect proxies.

    The effect of Impact Factors is at least partially independent of what they actually measure, if anything. If people think (even for invalid reasons) that they are important, then they are.

    It’s a Nash equilibrium, just like the best students wanting to go to highly rated colleges, which have high ratings because they get the best students.

  4. […] what’s sparked my post was a response from Peter Coles (@telescoper), called The Impact X-Factor, which proposed an idea I’d had a while back about judging papers against the IF of the journal […]

  5. I used impact factors exactly once in my life, when I was going up for tenure and promotion and I knew my evaluators would be favorably impressed by them (and even would be likely to suspect that I was trying to hide something if I failed to talk about them). I plead innocent to statistical illiteracy. Personally, I don’t think my behavior was “ethically questionable” (in Ralph Hartley’s phrase), but others may disagree. I suppose the highest moral ground would have been to try to educate my evaluators about the folly of paying attention to impact factors, but at that stage in my career I believe I was justified in paying attention to my own self-interest.

    • I have not raised this particular issue with anyone at my university since getting tenure, but I have tried to push the evaluation process towards greater rationality and fairness in other ways. If a natural opening arises (i.e., if I catch people using impact factors in an incorrect or unfair way), I’ll certainly raise the issue.

      (By the way, when I mentioned that I talked about impact factors in order to impress people who were evaluating me, I was thinking primarily of evaluators who are not physicists.)

  6. […] were also interesting commentaries on the post from Telescoper, Bjorn Brembs, DrugMonkey and Tom […]

  7. […] blogger making a real difference to policy! One can only dream.  Incidentally, I did post a little commentary on his post on here too and I’m very glad to see this clarified. Impact Factors are, frankly, […]

  8. […] that he had tapped into a deep well of frustration among academics. (Peter Coles’ related post, The Impact X-Factor, is also very well worth a […]

  9. […] a few people have asked me about its Journal Impact Factor. My usual response is (a) to repeat the arguments why the impact factor is daft and (b) point out that we have to have been running continuously for at least two years to have an […]

  10. […] about the Journal Impact Factor. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) point out that we have to have been running continuously for at least two years to have an […]

  11. […] more useful than journal-level descriptors such as the Journal Impact Factor (JIF). I think the Impact Factor is very silly actually. Unfortunately, however, there are some bureaucrats that seem to think that the Journal […]

  12. […] think Journal Impact Factors are a waste of time. Why use journal level metrics when there is plenty of information at the article level? On the […]

  13. […] for the Open Journal of Astrophysics. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) point out that the official JIF is calculated by Clarivate so it’s up to them to […]

  14. […] although I have grave reservations about the JIF, wanting to make the Open Journal available to as wide a range of authors as possible, I applied […]
