Archive for Journal Impact Factor

Open Journal of Astrophysics Impact Factor Poll

Posted in Open Access on February 5, 2021 by telescoper

A few people ask from time to time whether the Open Journal of Astrophysics has a Journal Impact Factor.

For those of you in the dark about this, the impact factor for Year N, usually published in Year N+1, is the average number of citations received in Year N by papers published in Years N-1 and N-2, so it requires two complete years of publishing.
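The two-year recipe above can be sketched in a few lines of code. This is just an illustration: the paper identifiers and citation counts below are made up, and real impact factors are computed from a citation database, not a hand-written dictionary.

```python
# Sketch of the two-year Journal Impact Factor calculation.
# All data here are hypothetical, purely for illustration.

def impact_factor(citations_in_year_n, papers_year_n1, papers_year_n2):
    """IF for Year N: citations received in Year N by papers published
    in Years N-1 and N-2, divided by the number of those papers."""
    eligible = papers_year_n1 + papers_year_n2
    total_cites = sum(citations_in_year_n.get(p, 0) for p in eligible)
    return total_cites / len(eligible)

# Hypothetical journal: five papers published over 2019-2020,
# and the citations each received during 2021.
papers_2019 = ["a", "b"]
papers_2020 = ["c", "d", "e"]
cites_2021 = {"a": 10, "b": 0, "c": 3, "d": 1, "e": 0}

print(impact_factor(cites_2021, papers_2019, papers_2020))  # 14/5 = 2.8
```

Note that the single highly-cited paper "a" contributes most of the numerator, which is exactly the statistical problem discussed below.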

For the OJA, therefore, the first time an official IF can be constructed is for 2021, which would be published in 2022 and would be based on the citations gained in 2021 (this year) for papers published in 2019 and 2020. Earlier years were incomplete, so no IF can be defined.

It is my personal view that article-level bibliometric data are far more useful than journal-level descriptors such as the Journal Impact Factor (JIF). I think the Impact Factor is very silly actually. Unfortunately, however, there are some bureaucrats who seem to think that the Journal Impact Factor is important, and some of our authors think we should apply to have an official one.
What do you think? If you have an opinion you can vote on the twitter poll here:

I should add that my criticisms of the Journal Impact Factor are not about the Open Journal’s own citation performance. We have every reason to believe our impact factor would be pretty high.

Comments welcome.

Measuring the lack of impact of journal papers

Posted in Open Access on February 4, 2016 by telescoper

I’ve been involved in a depressing discussion on the Astronomers facebook page, part of which was about the widespread use of Journal Impact Factors by appointments panels, grant agencies, promotion committees, and so on. It is argued (by some) that younger researchers should be discouraged from publishing in, e.g., the Open Journal of Astrophysics, because it doesn’t have an impact factor and they would therefore be jeopardising their research careers. In fact it takes two years for a new journal to acquire an impact factor, so if you take this advice seriously nobody should ever publish in any new journal.

For the record, I will state that no promotion committee, grant panel or appointment process I’ve ever been involved in has even mentioned impact factors. However, it appears that some do, despite the fact that they are demonstrably worse than useless at measuring the quality of publications. You can find comprehensive debunking of impact factors and exposure of their flaws all over the internet if you care to look: a good place to start is Stephen Curry’s article here. I’d make an additional point, which is that the impact factor uses citation information for the journal as a whole as a sort of proxy measure of the research quality of papers published in it. But why on Earth should one do this when citation information for each paper is freely available? Why use a proxy when it’s trivial to measure the real thing?

The basic statistical flaw behind impact factors is that they are based on the arithmetic mean number of citations per paper. Since the distribution of citations in all journals is very skewed, this number is dragged upwards by a few papers with extremely large numbers of citations. In fact, most papers published have many fewer citations than the impact factor of the journal. It’s all very misleading, especially when used as a marketing tool by cynical academic publishers.
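The skewness point is easy to demonstrate with a toy example. The citation counts below are invented, but the shape of the distribution — many barely-cited papers and one blockbuster — is typical of real journals, and the mean and median tell very different stories:

```python
from statistics import mean, median

# Hypothetical citation counts, one per paper published in a journal.
# A single highly-cited paper dominates the total.
citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 120]

print(mean(citations))    # 13.4 -- the impact-factor-style average
print(median(citations))  # 1.5  -- what a typical paper actually gets
```

Remove the one paper with 120 citations and the mean collapses to 1.6, while the median barely moves: the arithmetic mean is telling you about the tail, not about a typical paper.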

Thinking about this on the bus on my way into work this morning I decided to suggest a couple of bibliometric indices that should help put impact factors into context. I urge relevant people to calculate these for their favourite journals:

  • The Dead Paper Fraction (DPF). This is defined to be the fraction of papers published in the journal that receive no citations at all in the census period.  For journals with an impact factor of a few, this is probably a majority of the papers published.
  • The Unreliability of Impact Factor Factor (UIFF). This is defined to be the fraction of papers with fewer citations than the Impact Factor. For many journals this is most of their papers, and the larger this fraction is the more unreliable their Impact Factor is.
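Both of these are trivial to compute once you have per-paper citation counts. A minimal sketch, using the same kind of invented skewed distribution as above:

```python
# Hypothetical citation counts, one per paper published in the journal.
citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 120]
impact_factor = sum(citations) / len(citations)  # 13.4

# Dead Paper Fraction (DPF): papers with no citations at all
# in the census period.
dpf = sum(1 for c in citations if c == 0) / len(citations)

# Unreliability of Impact Factor Factor (UIFF): papers with fewer
# citations than the impact factor itself.
uiff = sum(1 for c in citations if c < impact_factor) / len(citations)

print(dpf)   # 0.3
print(uiff)  # 0.9
```

In this toy journal, 90% of papers sit below the impact factor, which is exactly the sense in which the single headline number misrepresents the typical paper.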

Another useful measure for individual papers is

  • The Corrected Impact Factor. If a paper with a number N of actual citations is published in a journal with impact factor I then the corrected impact factor is C=N-I. For a deeply uninteresting paper published in a flashily hyped journal this will be large and negative, and should be viewed accordingly by relevant panels.
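The arithmetic here is a one-liner; the point is the sign. Continuing the invented numbers from the examples above:

```python
# Corrected Impact Factor: a paper's actual citation count N
# minus the impact factor I of the journal it appeared in.

def corrected_impact_factor(n_citations, journal_if):
    return n_citations - journal_if

# A hypothetical paper with 2 citations in a journal whose
# impact factor is 13.4: deeply negative, however flashy the venue.
print(corrected_impact_factor(2, 13.4))
```

A positive value means the paper is outperforming its journal's headline number; a negative one means the journal's brand is doing the work the paper isn't.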

Other suggestions for citation metrics less stupid than the impact factor are welcome through the comments box…