The Great Curve II: Citation distributions and reverse engineering the JIF
Here’s a lengthy study of Journal Impact Factors. It’s mainly about cell biology journals, but I think the conclusions apply across all scientific disciplines. The JIF is so flawed as to be meaningless, but this discussion suggests that the situation is even worse than that, with some advertised JIFs being wrong…
There have been calls for journals to publish the distribution of citations to the papers they publish (1, 2, 3). The idea is to turn the focus away from just one number – the Journal Impact Factor (JIF) – and to look at all the data. Some journals have responded by publishing the data that underlie the JIF (EMBO J, Peer J, Royal Soc, Nature Chem). It would be great if more journals did this. Recently, Stuart Cantrill from Nature Chemistry went one step further and compared the distribution of cites at his journal with those at other chemistry journals. I really liked this post, and it made me think that I should just go ahead and harvest the data for cell biology journals and post it.
This post is in two parts. First, I’ll show the data for 22 journals. They’re broadly cell biology, but there’s something…
January 5, 2016 at 7:12 pm
Even if one judges quality by the number of citations—highly questionable in itself, and not far from judging the quality of music by the position in the charts—then why not just concentrate on the number of citations the paper has, rather than the average for the journal in which it appeared?
IIRC, the typical paper in a high-impact journal actually has fewer citations than the typical paper in a normal journal, because in the former case the high impact factor is the result of a few really highly cited papers skewing the distribution.
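The point about skew can be made concrete with a small sketch. The numbers below are hypothetical, chosen only to illustrate how a mean-based metric like the JIF can be dominated by a couple of heavily cited papers while the median paper remains modestly cited:

```python
# Illustrative sketch with hypothetical citation counts: a few highly
# cited papers inflate the mean (the JIF is a mean) while the typical
# (median) paper receives far fewer citations.
from statistics import mean, median

# Hypothetical counts for 10 papers in a "high-impact" journal:
# most are cited a handful of times, two are cited heavily.
citations = [1, 2, 2, 3, 3, 4, 5, 6, 150, 300]

jif_like_mean = mean(citations)    # 47.6 — pulled up by two outliers
typical_paper = median(citations)  # 3.5 — what a typical paper gets

print(jif_like_mean, typical_paper)
```

With these made-up numbers, a journal could advertise a JIF-like figure above 47 even though more than half of its papers were cited five times or fewer.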
January 5, 2016 at 9:47 pm
Thanks for this link, Peter. I think this is well known, but it’s lovely to see the distributions laid out explicitly.