Archive for Impact Factor

Not the Open Journal of Astrophysics Impact Factor – Update

Posted in Open Access, The Universe and Stuff on February 11, 2020 by telescoper

I thought I would give an update with some bibliometric information about the 12 papers published by the Open Journal of Astrophysics in 2019. The NASA/ADS system had been struggling to tally the citations to a couple of our papers, but this issue has now been resolved. According to this source the total number of citations for these papers is 532 (as of today). This number is dominated by one particular paper, which has 443 citations according to NASA/ADS. Excluding this paper gives an average for the remaining 11 papers of about 8.1 citations each.

I’ll take this opportunity to reiterate some comments about the Journal Impact Factor. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor anyway.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the number of citations received in that year by all the papers published in the journal over the previous two-year period, divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper for papers published in a two-year window. The impact factor for 2019 would be defined using citations to papers published in 2017 and 2018, and so on.
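To put that definition in symbols (my notation, nothing official):

```latex
\mathrm{JIF}(y) \;=\;
  \frac{\text{citations received in year } y \text{ to papers published in years } y-1 \text{ and } y-2}
       {\text{number of papers published in years } y-1 \text{ and } y-2}
```

So, for example, JIF(2019) counts citations made during 2019 to papers that appeared in 2017 and 2018, averaged over the number of those papers.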

The impact factor is prone to the same issue as the simple average I quoted above, in that citation statistics are generally heavily skewed and the average can therefore be dragged upwards by a small number of papers with lots of citations (in our case just one).
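A quick numerical illustration of how strong this effect can be. The per-paper counts below are invented: only the totals (532 citations across 12 papers, 443 of them to one paper) match the figures quoted above, while the breakdown is made up for the sake of the example:

```python
from statistics import mean, median

# Hypothetical per-paper citation counts; only the totals (532 overall,
# 443 for the top paper) match the post, the breakdown is invented.
citations = [443, 15, 12, 10, 9, 8, 8, 7, 6, 5, 5, 4]

print(f"mean             = {mean(citations):.1f}")       # ~44.3, dragged up by one paper
print(f"median           = {median(citations):.1f}")     # 8.0, far more typical
print(f"mean (excl. top) = {mean(citations[1:]):.1f}")   # ~8.1
```

The mean suggests a "typical" paper attracts over 40 citations; in this sample no paper other than the outlier gets more than 15.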

I stress again we don’t have an Impact Factor as such for the Open Journal. However, for reference (but obviously not comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

My main point, though, is that with so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to crudely aggregated statistics at the journal level. Judge the contents, not the packaging.

This post is based on an article at the OJA blog.


Not the Open Journal of Astrophysics Impact Factor – Update

Posted in Open Access, The Universe and Stuff on January 20, 2020 by telescoper

Now that we have started a new year, and a new volume of the Open Journal of Astrophysics, I thought I would give an update with some bibliometric information about the 12 papers we published in 2019.

It is still early days for aggregating citations for 2019 but, using a combination of the NASA/ADS and Inspire-HEP systems, I have been able to place a firm lower limit of 408 on the total number of citations so far to those papers, giving an average citation rate of 34 per paper.
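For what it’s worth, here is one plausible way to construct such a lower limit when two databases each index an overlapping but incomplete set of citations; the paper identifiers and counts below are invented for illustration:

```python
# Hypothetical per-paper citation counts from the two databases.
# Each database can miss citations, so for any one paper the larger
# of the two counts is still a lower limit on the true number.
ads     = {"paper-01": 320, "paper-02": 15, "paper-03": 0}
inspire = {"paper-01": 327, "paper-02": 12, "paper-03": 9}

lower_limit = sum(max(ads[p], inspire[p]) for p in ads)
print(lower_limit, lower_limit / len(ads))  # 351 117.0
```

(Simply adding the two databases’ totals would double-count citations indexed by both, so it would overestimate rather than bound from below.)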

These numbers are dominated by one particular paper, which has 327 citations according to Inspire. Excluding this paper gives an average number of citations for the remaining 11 of 7.4.

I’ll take this opportunity to reiterate some comments about the Journal Impact Factor. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor anyway.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the number of citations received in that year by all the papers published in the journal over the previous two-year period, divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper for papers published in a two-year window. The impact factor for 2019 would be defined using citations to papers published in 2017 and 2018, and so on.

The impact factor is prone to the same issue as the simple average I quoted above in that citation statistics are generally heavily skewed and the average can therefore be dragged upwards by a small number of papers with lots of citations (in our case just one).

I stress again we don’t have an Impact Factor for the Open Journal. However, for reference (but obviously not direct comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

My main point, though, is that with so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to crudely aggregated statistics at the journal level. Judge the contents, not the packaging.


Not the Open Journal of Astrophysics Impact Factor

Posted in Open Access on October 22, 2019 by telescoper

Yesterday evening, after I’d finished my day job, I was doing some work on the Open Journal of Astrophysics ahead of a talk I am due to give this afternoon as part of the current Research Week at Maynooth University. The main thing I was doing was checking on citations for the papers we have published so far, to be sure that the Crossref mechanism is working properly and the papers are appearing correctly on, e.g., the NASA/ADS system. There are one or two minor things that need correcting, but it’s basically doing fine.

In the course of all that I remembered that when I’ve been giving talks about the Open Journal project quite a few people have asked me about its Journal Impact Factor. My usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor, so we don’t really have one.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the sum of the citations received in that year by all papers published in the journal over the previous two-year period, divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper for papers published in a two-year window. The impact factor for 2019 would be defined using citations to papers published in 2017 and 2018, and so on.

The Open Journal of Astrophysics didn’t publish any papers in 2017 and only one in 2018 so obviously we can’t define an official impact factor for 2019. However, since I was rummaging around with bibliometric data at the time I could work out the average number of citations per paper for the papers we have published so far in 2019. That number is:

I stress again that this is not the Impact Factor for the Open Journal but it is a rough indication of the citation impact of our papers. For reference (but obviously not comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

Measuring the lack of impact of journal papers

Posted in Open Access on February 4, 2016 by telescoper

I’ve been involved in a depressing discussion on the Astronomers Facebook page, part of which was about the widespread use of Journal Impact Factors by appointments panels, grant agencies, promotion committees, and so on. It is argued (by some) that younger researchers should be discouraged from publishing in, e.g., the Open Journal of Astrophysics, because it doesn’t have an impact factor and they would therefore be jeopardising their research careers. In fact it takes two years for a new journal to acquire an impact factor, so if you take this advice seriously nobody should ever publish in any new journal.

For the record, I will state that no promotion committee, grant panel or appointment process I’ve ever been involved in has even mentioned impact factors. However, it appears that some do, despite the fact that they are demonstrably worse than useless at measuring the quality of publications. You can find comprehensive debunking of impact factors and exposure of their flaws all over the internet if you care to look: a good place to start is Stephen Curry’s article here. I’d make an additional point, which is that the impact factor uses citation information for the journal as a whole as a sort of proxy measure of the research quality of the papers published in it. But why on Earth should one do this when citation information for each paper is freely available? Why use a proxy when it’s trivial to measure the real thing?

The basic statistical flaw behind impact factors is that they are based on the arithmetic mean number of citations per paper. Since the distribution of citations in all journals is very skewed, this number is dragged upwards by a few papers with extremely large numbers of citations. In fact, most papers published have many fewer citations than the impact factor of the journal. It’s all very misleading, especially when used as a marketing tool by cynical academic publishers.

Thinking about this on the bus on my way into work this morning, I decided to suggest a couple of bibliometric indices that should help put impact factors into context. I urge relevant people to calculate these for their favourite journals (a rough sketch of how one might compute them follows the definitions below):

  • The Dead Paper Fraction (DPF). This is defined to be the fraction of papers published in the journal that receive no citations at all in the census period.  For journals with an impact factor of a few, this is probably a majority of the papers published.
  • The Unreliability of Impact Factor Factor (UIFF). This is defined to be the fraction of papers with fewer citations than the Impact Factor. For many journals this is most of their papers, and the larger this fraction is the more unreliable their Impact Factor is.

Another useful measure for individual papers is

  • The Corrected Impact Factor. If a paper with N actual citations is published in a journal with impact factor I, then its corrected impact factor is C = N - I. For a deeply uninteresting paper published in a flashily hyped journal this will be large and negative, and should be viewed accordingly by relevant panels.
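Here is a minimal sketch of how all three indices might be computed from a list of per-paper citation counts; the function name and the example numbers are my own invention:

```python
from statistics import mean

def journal_metrics(citations, impact_factor):
    """Dead Paper Fraction, Unreliability of Impact Factor Factor,
    and per-paper Corrected Impact Factors, as defined above."""
    n = len(citations)
    dpf = sum(c == 0 for c in citations) / n              # papers never cited
    uiff = sum(c < impact_factor for c in citations) / n  # papers below the IF
    corrected = [c - impact_factor for c in citations]    # C = N - I per paper
    return dpf, uiff, corrected

# Invented example: a skewed "journal" whose mean citation count
# (roughly its impact factor) exceeds what most of its papers achieve.
cites = [0, 0, 1, 1, 2, 3, 4, 60]
jif = mean(cites)  # 8.875
dpf, uiff, corrected = journal_metrics(cites, jif)
print(f"IF = {jif:.2f}, DPF = {dpf:.2f}, UIFF = {uiff:.2f}")
# IF = 8.88, DPF = 0.25, UIFF = 0.88
```

Note how the UIFF comes out at 7/8: on these numbers the impact factor describes almost none of the journal’s papers.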

Other suggestions for citation metrics less stupid than the impact factor are welcome through the comments box…


The Impact X-Factor

Posted in Bad Statistics, Open Access on August 14, 2012 by telescoper

Just time for a quick (yet still rather tardy) post to direct your attention to an excellent polemical piece by Stephen Curry pointing out the pointlessness of Journal Impact Factors. For those of you in blissful ignorance about the statistical aberration that is the JIF, it’s basically a measure of the average number of citations attracted by a paper published in a given journal. The idea is that if you publish a paper in a journal with a large JIF then it’s in among a number of papers that are highly cited and therefore presumably high quality. Using a form of Proof by Association, your paper must therefore be excellent too, hanging around with tall people being a tried-and-tested way of becoming tall.

I won’t repeat all Stephen Curry’s arguments as to why this is bollocks – read the piece for yourself – but one of the most important is that the distribution of citations per paper is extremely skewed, so the average is dragged upwards by a few papers with huge numbers of citations. As a consequence, most papers published in a journal with a large JIF attract many fewer citations than the average. Moreover, modern bibliometric databases make it quite easy to extract citation information for individual papers, which is what is relevant if you’re trying to judge the quality or impact of a particular piece of work, so why bother with the JIF at all?

I will however copy the summary, which is to the point:

So consider all that we know of impact factors and think on this: if you use impact factors you are statistically illiterate.

  • If you include journal impact factors in the list of publications in your cv, you are statistically illiterate.
  • If you are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors, you are statistically illiterate.
  • If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
  • If you see someone else using impact factors and make no attempt at correction, you connive at statistical illiteracy.

Statistical illiteracy is by no means as rare among scientists as we’d like to think, but at least I can say that I pay no attention whatsoever to Journal Impact Factors. In fact I don’t think many people in astronomy or astrophysics use them at all. I’d be interested to hear from anyone who does.

I’d like to add a little coda to Stephen Curry’s argument. I’d say that if you publish a paper in a journal with a large JIF (e.g. Nature) but the paper turns out to attract very few citations then the paper should be penalised in a bibliometric analysis, rather like the handicap system used in horse racing or golf. If, despite the press hype and other tedious trumpetings associated with the publication of a Nature paper, the work still attracts negligible interest then it must really be a stinker and should be rated as such by grant panels, etc. Likewise if you publish a paper in a less impactful journal which nevertheless becomes a citation hit then it should be given extra kudos because it has gained recognition by quality alone.

Of course citation numbers don’t necessarily mean quality. Many excellent papers are slow burners from a bibliometric point of view. However, if a journal markets itself as being a vehicle for papers that are intended to attract large citation counts and a paper published there flops then I think it should attract a black mark. Hoist it on its own petard, as it were.

So I suggest each paper be awarded an Impact X-Factor, based on the difference between its citation count and the JIF of the journal in which it appears. For most papers this will of course be negative, which would serve their authors right for mentioning the Impact Factor in the first place.
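In symbols (my notation), for a paper with N citations published in a journal with impact factor J:

```latex
X = N - J
```

So, to take an invented example, a paper with 3 citations in a journal boasting an impact factor of 38.6 would score X = 3 - 38.6 = -35.6.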

PS. I chose the name “X-factor” as in the TV show precisely for its negative connotations.