Archive for citations

The (unofficial) 2021 Journal Impact Factor for the Open Journal of Astrophysics

Posted in Open Access, The Universe and Stuff on July 16, 2022 by telescoper

Since a few people have been asking about the Journal Impact Factor (JIF) for the Open Journal of Astrophysics, I thought I’d do a quick post in response.

When asked about this my usual reply is (a) to repeat the arguments why the impact factor is daft and (b) to point out that the official JIF is calculated by Clarivate, so it’s up to them to calculate it – us plebs don’t get a say.

On the latter point, Clarivate takes its bibliometric data from the Web of Science (which it owns). I have applied on behalf of the Open Journal of Astrophysics for the journal to be listed in the Web of Science, but it has not yet been indexed.

Anyway, the fact that it’s out of my hands doesn’t stop people from asking, so I thought I’d proceed with my own calculation using not the Web of Science but NASA/ADS (which probably underestimates citation numbers but which is freely available, so you can check the numbers using the interface here); the official NASA/ADS abbreviation for the Open Journal of Astrophysics is OJAp.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is defined as the sum of the citations received in that year by all papers published in the journal over the previous two-year period, divided by the total number of papers published in that journal over that period. It’s therefore the average citations per paper published in a two-year window. Since our first full year of publication was 2019, the first year for which we can calculate a JIF is 2021 (i.e. last year), which is defined using data from 2019 and 2020.
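Since this two-year formula comes up repeatedly on this blog, here is a minimal sketch of the arithmetic (the function name is my own; the official figure is computed by Clarivate from Web of Science data, not like this):

```python
# JIF for year Y = citations received in year Y by papers published in
# years Y-1 and Y-2, divided by the number of those papers.

def impact_factor(citations_in_year: int, papers_prev_two_years: int) -> float:
    return citations_in_year / papers_prev_two_years

# The 2021 numbers for OJAp: 12 papers (2019) plus 15 papers (2020),
# cited 193 times during 2021.
jif_2021 = impact_factor(193, 12 + 15)
print(round(jif_2021, 2))  # 7.15
```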

I stress again we don’t have an official Journal Impact Factor for the Open Journal of Astrophysics, but one can calculate its value easily. In 2019 and 2020 we published 12 and 15 papers respectively, a total of 27. These papers were cited a total of 193 times in 2021. The journal impact factor for 2021 is therefore … roll of drums … 193/27, which gives 7.15.

If you don’t believe me, you can check the numbers yourself. For comparison, the latest available Impact Factor (2020) for Monthly Notices of the Royal Astronomical Society is 5.29 and for Astronomy & Astrophysics is 5.80. OJAp’s first full year of publication was 2019 (in which we published 12 papers) but we did publish one paper in 2018. Based on the 134 citations received by these 13 papers in 2020, our 2020 Journal Impact Factor was 10.31, much higher than that of MNRAS or A&A.

Furthermore, we published 32 papers in 2020 and 2021, which have so far received 125 citations in 2022. Our Journal Impact Factor for 2022 will therefore be at least 125/32 = 3.91, and if those 32 papers are cited at the same rate for the rest of this year the 2022 JIF will be about 7.5.
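The “same rate” extrapolation can be made explicit with a little date arithmetic. This is a sketch under the assumption of a strictly uniform citation rate (with these inputs it lands nearer 7 than 7.5; the difference presumably comes down to rounding and the exact date of the citation snapshot):

```python
from datetime import date

# Project a partial-year citation count to a full-year JIF estimate,
# assuming citations accrue at a constant rate through the year.
def projected_jif(citations_so_far: int, n_papers: int,
                  snapshot: date, year: int) -> float:
    elapsed_days = (snapshot - date(year, 1, 1)).days + 1
    projected_citations = citations_so_far * 365 / elapsed_days
    return projected_citations / n_papers

# 125 citations to 32 papers as of the date of this post.
est = projected_jif(125, 32, date(2022, 7, 16), 2022)
print(round(est, 1))  # 7.2
```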

Who knows, perhaps these numbers will shame Clarivate into giving us an official figure?

With so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to such a crudely aggregated statistic at the journal level as the JIF. One should judge the contents, not the packaging. I am, however, fully aware that many people who hold the purse strings for research insist on publications in journals with a high JIF. If there were any fairness in the system they would be mandating astronomy publications in OJAp rather than MNRAS or A&A.

Anyway, it might annoy all the right people if I add a subtitle to the Open Journal of Astrophysics: “The World’s Leading Astrophysics Journal”…

A Citation Landmark

Posted in Open Access, The Universe and Stuff on December 31, 2021 by telescoper

Just over a week ago I posted an item about the citations garnered by papers in the Open Journal of Astrophysics, in the course of which I speculated on whether we would reach the 1000 mark before the end of 2021. Well, I checked on the NASA/ADS system today and it seems we have just made it.

There is still one paper we have published but not yet listed on ADS so the real number might be a little higher. It’s also possible that the figure will dip below a thousand again, at least for a short time. That is because ADS sometimes counts the citations to a published paper and to its preprint separately thus causing some duplication; when the issue is finally resolved the number of citations can go down.

Anyway, that’s a nice note to end the year on. Tomorrow we start with Volume 5 (2022)!

The Future of Publishing

Posted in Open Access on December 22, 2021 by telescoper
Citations to papers in the Open Journal of Astrophysics

I’ve long thought that The Open Journal of Astrophysics is ahead of its time, but when I checked the citation record via NASA/ADS the other day I found corroborating evidence in the form of citations from papers published in 2022! It’s very futuristic to be cited by papers that haven’t been published yet.

I’ve actually noticed this sort of thing before. Some journals announce publications and lodge metadata well in advance of the official publication date so the citations get tracked. At the Open Journal of Astrophysics we usually publish papers within a day or two of acceptance so this doesn’t really happen to papers cited from our articles.

Notice also there are citations going back to 2014. This might surprise you since our first papers were not published until 2016. The reason is that some papers were hanging around on the arXiv accumulating citations before we officially published them.

That deals with the Ghosts of Citations Past and Citations Yet to Come so I feel I should mention the Present situation. According to ADS, as of today (22nd December 2021), papers in the Open Journal of Astrophysics have garnered 992 citations. That’s an average of just over 20 per paper. We might just get to a thousand before the end of the year. Now that would be a nice Christmas Present!

Citation Metrics and “Judging People’s Careers”

Posted in Bad Statistics, The Universe and Stuff on October 29, 2021 by telescoper

There’s a paper on the arXiv by John Kormendy entitled Metrics of research impact in astronomy: Predicting later impact from metrics measured 10-15 years after the PhD. The abstract is as follows.

This paper calibrates how metrics derivable from the SAO/NASA Astrophysics Data System can be used to estimate the future impact of astronomy research careers and thereby to inform decisions on resource allocation such as job hires and tenure decisions. Three metrics are used, citations of refereed papers, citations of all publications normalized by the numbers of co-authors, and citations of all first-author papers. Each is individually calibrated as an impact predictor in the book Kormendy (2020), “Metrics of Research Impact in Astronomy” (Publ Astron Soc Pac, San Francisco). How this is done is reviewed in the first half of this paper. Then, I show that averaging results from three metrics produces more accurate predictions. Average prediction machines are constructed for different cohorts of 1990-2007 PhDs and used to postdict 2017 impact from metrics measured 10, 12, and 15 years after the PhD. The time span over which prediction is made ranges from 0 years for 2007 PhDs to 17 years for 1990 PhDs using metrics measured 10 years after the PhD. Calibration is based on perceived 2017 impact as voted by 22 experienced astronomers for 510 faculty members at 17 highly-ranked university astronomy departments world-wide. Prediction machinery reproduces voted impact estimates with an RMS uncertainty of 1/8 of the dynamic range for people in the study sample. The aim of this work is to lend some of the rigor that is normally used in scientific research to the difficult and subjective job of judging people’s careers.

This paper has understandably generated a considerable reaction on social media especially from early career researchers dismayed at how senior astronomers apparently think they should be judged. Presumably “judging people’s careers” means deciding whether or not they should get tenure (or equivalent) although the phrase is not a pleasant one to use.

My own opinion is that while citations and other bibliometric indicators do contain some information, they are extremely difficult to apply in the modern era in which so many high-impact results are generated by large international teams. Note also the extreme selectivity of this exercise: just 22 “experienced astronomers” provide the “calibration”, which is for faculty in just 17 “highly-ranked” university astronomy departments. No possibility of any bias there, obviously. Subjectivity doesn’t turn into objectivity just because you make it quantitative.

If you’re interested, here are the names of the 22:

Note that the author of the paper is himself on the list. I find that deeply inappropriate.

Anyway, the overall level of statistical gibberish in this paper is such that I am amazed it has been accepted for publication, but then it is in the Proceedings of the National Academy of Sciences, a journal that has form when it comes to dodgy statistics. If I understand correctly, PNAS has a route that allows “senior” authors to publish papers without passing through peer review. That’s the only explanation I can think of for this.

As a rejoinder I’d like to mention this paper by Adler et al. from 12 years ago, which has the following abstract:

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.

O brave new world that has such metrics in it.

Update: John Kormendy has now withdrawn the paper; you can see his statement here.

Not the Open Journal of Astrophysics Impact Factor – Update

Posted in Open Access, The Universe and Stuff on February 11, 2020 by telescoper

I thought I would give an update with some bibliometric information about the 12 papers published by the Open Journal of Astrophysics in 2019. The NASA/ADS system has been struggling to tally the citations to a couple of our papers, but this issue has now been resolved. According to this source the total number of citations for these papers is 532 (as of today). This number is dominated by one particular paper, which has 443 citations according to NASA/ADS. Excluding this paper gives an average of about 8.1 citations for the remaining 11.

I’ll take this opportunity to re-iterate some comments about the Journal Impact Factor. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor anyway.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the sum of the citations for all papers published in the journal over the previous two-year period divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper published in a two-year window. The impact factor for 2019 would be defined using data from 2017 and 2018, etc.

The impact factor is prone to the same issue as the simple average I quoted above, in that citation statistics are generally heavily skewed and the average can therefore be dragged upwards by a small number of papers with lots of citations (in our case just one).
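To see the skew effect numerically: only the total (532) and the top paper (443) are given in this post, so the remaining per-paper counts below are invented for illustration, but any set summing to 532 makes the same point — the mean is dragged far above the typical paper, while the median barely notices.

```python
from statistics import mean, median

# Hypothetical per-paper citation counts: 443 for the top paper is from
# the post; the other eleven values are made up but sum with it to 532.
counts = [443, 20, 15, 12, 10, 9, 8, 6, 4, 3, 1, 1]

print(sum(counts))             # 532
print(round(mean(counts), 1))  # 44.3
print(median(counts))          # 8.5
```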

I stress again we don’t have an Impact Factor as such for the Open Journal. However, for reference (but obviously not comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

My main point, though, is that with so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to crudely aggregated statistics at the journal level. Judge the contents, not the packaging.

This post is based on an article at the OJA blog.

Not the Open Journal of Astrophysics Impact Factor – Update

Posted in Open Access, The Universe and Stuff on January 20, 2020 by telescoper

Now that we have started a new year, and a new volume of the Open Journal of Astrophysics, I thought I would give an update with some bibliometric information about the 12 papers we published in 2019.

It is still early days for aggregating citations for 2019 but, using a combination of the NASA/ADS system and Inspire-HEP, I have been able to place a firm lower limit of 408 on the total number of citations so far for those papers, giving an average citation rate of 34 per paper.

These numbers are dominated by one particular paper which has 327 citations according to Inspire (see above). Excluding this paper gives an average number of citations for the remaining 11 of 7.4.

I’ll take this opportunity to re-iterate some comments about the Journal Impact Factor. When asked about this my usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor anyway.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the sum of the citations for all papers published in the journal over the previous two-year period divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper published in a two-year window. The impact factor for 2019 would be defined using data from 2017 and 2018, etc.

The impact factor is prone to the same issue as the simple average I quoted above in that citation statistics are generally heavily skewed and the average can therefore be dragged upwards by a small number of papers with lots of citations (in our case just one).

I stress again we don’t have an Impact Factor for the Open Journal. However, for reference (but obviously not direct comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

My main point, though, is that with so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to crudely aggregated statistics at the journal level. Judge the contents, not the packaging.

ADS and the Open Journal of Astrophysics

Posted in Open Access on January 19, 2020 by telescoper

Most if not all of the authors of papers published in the Open Journal of Astrophysics, along with a majority of astrophysicists in general, use the NASA/SAO Astrophysics Data System (ADS) as an important route to the research literature in their domain, including bibliometric statistics and other information. Indeed this is the most important source of such data for most working astrophysicists. In light of this we have been taking steps to facilitate better interaction between the Open Journal of Astrophysics and the ADS.

First, note that journals indexed by ADS are assigned a short code that makes it easier to retrieve a publication. For reference, the short code for the Open Journal of Astrophysics is OJAp. For example, the 12 papers published by the Open Journal of Astrophysics can be found on ADS here.

If you click the above link you will find that the papers published more recently have not got their citations assigned yet. When we publish a paper at the Open Journal of Astrophysics we assign a DOI and deposit it and related metadata with a system called Crossref, which is accessed by ADS to populate bibliographic fields in its own database. ADS also assigns a unique bibliographic code it generates itself (based on the metadata it obtains from Crossref). This process can take a little while, however, as both Crossref and ADS update using batch processes, the latter usually running only at weekends. This introduces a significant delay in aggregating the citations acquired via different sources.

To complicate things further, papers submitted to the arXiv as preprints are indexed on ADS as preprints and only appear as journal articles when they are published. Among other things, citations from the preprint version are then aggregated on the system with those of the published article, but it can take a while before this process is completed, particularly if an author does not update the journal reference on arXiv.

For a combination of reasons, therefore, the papers we have published in the past have sometimes appeared on ADS out of order. On top of this, of the 12 papers published in 2019, there is one assigned a bibliographic code ending in 13 by ADS and none numbered 6! This is not too much of a problem, as the ADS identifiers are unique, but the result is not as tidy as it might be.

To further improve our service to the community, we have decided at the Open Journal of Astrophysics that from now on we will speed up this interaction with ADS by depositing information directly with ADS at the same time as we lodge it with Crossref. This means that (a) ADS does not have to rely on authors updating the arXiv field and (b) we can give ADS information directly that is not lodged with Crossref.

I hope this clarifies the situation.

Not the Open Journal of Astrophysics Impact Factor

Posted in Open Access on October 22, 2019 by telescoper

Yesterday evening, after I’d finished my day job, I was doing some work on the Open Journal of Astrophysics ahead of a talk I am due to give this afternoon as part of the current Research Week at Maynooth University. The main thing I was doing was checking on citations for the papers we have published so far, to be sure that the Crossref mechanism is working properly and the papers are appearing correctly on, e.g., the NASA/ADS system. There are one or two minor things that need correcting, but it’s basically doing fine.

In the course of all that I remembered that when I’ve been giving talks about the Open Journal project quite a few people have asked me about its Journal Impact Factor. My usual response is (a) to repeat the arguments why the impact factor is daft and (b) to point out that we have to have been running continuously for at least two years to have an official impact factor, so we don’t really have one.

For those of you who can’t be bothered to look up the definition of an impact factor, for a given year it is basically the sum of the citations in that year for all papers published in the journal over the previous two-year period divided by the total number of papers published in that journal over the same period. It’s therefore the average citations per paper published in a two-year window. The impact factor for 2019 would be defined using citations to papers published in 2017 and 2018, etc.

The Open Journal of Astrophysics didn’t publish any papers in 2017 and only one in 2018 so obviously we can’t define an official impact factor for 2019. However, since I was rummaging around with bibliometric data at the time I could work out the average number of citations per paper for the papers we have published so far in 2019. That number is:

I stress again that this is not the Impact Factor for the Open Journal but it is a rough indication of the citation impact of our papers. For reference (but obviously not comparison) the latest actual impact factors (2018, i.e. based on 2016 and 2017 numbers) for some leading astronomy journals are: Monthly Notices of the Royal Astronomical Society 5.23; Astrophysical Journal 5.58; and Astronomy and Astrophysics 6.21.

Citation Analysis of Scientific Categories

Posted in Open Access, Science Politics on May 18, 2018 by telescoper

I stumbled across an interesting paper the other day with the title Citation Analysis of Scientific Categories. The title isn’t really accurate because not all the 236 categories covered by the analysis are `scientific': they include many topics in the arts and humanities too. Anyway, the abstract is here:

Databases catalogue the corpus of research literature into scientific categories and report classes of bibliometric data such as the number of citations to articles, the number of authors, journals, funding agencies, institutes, references, etc. The number of articles and citations in a category are gauges of productivity and scientific impact but a quantitative basis to compare researchers between categories is limited. Here, we compile a list of bibliometric indicators for 236 science categories and citation rates of the 500 most cited articles of each category. The number of citations per paper vary by several orders of magnitude and are highest in multidisciplinary sciences, general internal medicine, and biochemistry and lowest in literature, poetry, and dance. A regression model demonstrates that citation rates to the top articles in each category increase with the square root of the number of articles in a category and decrease proportionately with the age of the references: articles in categories that cite recent research are also cited more frequently. The citation rate correlates positively with the number of funding agencies that finance the research. The category h-index correlates with the average number of cites to the top 500 ranked articles of each category (R2 = 0.997). Furthermore, only a few journals publish the top 500 cited articles in each category: four journals publish 60% (σ = ±20%) of these and ten publish 81% (σ = ±15%).

The paper is open access (I think) and you can find the whole thing here.

I had a discussion over lunch today with a couple of colleagues here in Maynooth about using citations. I think we agreed that citation analysis does convey some information about the impact of a person’s research, but that information is rather limited. One of the difficulties is that publication rates and citation activity are very discipline-dependent, so one can’t easily compare individuals in different areas. The paper is interesting because it presents a table showing how various statistical citation measures vary across fields and sub-fields; physics is broken down into a number of distinct areas (e.g. Astronomy & Astrophysics, Particle Physics, Condensed Matter and Nuclear Physics) across which there is considerable variation. How best to use this information is still not clear.

Metrics for `Academic Reputation’

Posted in Bad Statistics, Science Politics on April 9, 2018 by telescoper

This weekend I came across a provocative paper on the arXiv with the title Measuring the academic reputation through citation records via PageRank. Here is the abstract:

The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of University Rankings have been proposed to quantify the excellence of different research institutions in the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, both at the institutional and governmental level. Yet, the debate on what rankings are  exactly measuring is enduring. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Our results allow to quantifying the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias, at odds with ad hoc surveys often employed by ranking bodies in order to attain similar results. Because our findings are found to correlate extremely well with the ARWU Subject rankings, the approach we propose in our paper may open the door to new, Academic Ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of Academic Prestige with its quantitative measurements via publication impact.

(The link to the description of the PageRank algorithm was added by me; I also corrected a few spelling mistakes in the abstract). You can find the full paper here (PDF).
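The PageRank idea in the abstract can be sketched in a few lines of code: rank flows along citation links, so a node cited by highly ranked nodes ends up highly ranked itself. This toy version uses plain power iteration on an invented four-node citation network (the paper, of course, builds its networks from real Web of Science data):

```python
# Power-iteration PageRank. links maps each node to the nodes it cites.
def pagerank(links: dict, d: float = 0.85, iters: int = 100) -> dict:
    nodes = sorted(set(links) | {v for targets in links.values() for v in targets})
    n = len(nodes)
    rank = {u: 1 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}  # teleportation term
        for u in nodes:
            targets = links.get(u, [])
            if not targets:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in targets:
                    new[v] += d * rank[u] / len(targets)
        rank = new
    return rank

# Invented toy network: C is cited by A, B and D, so it ranks highest.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]})
print(max(ranks, key=ranks.get))  # C
```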

For what it’s worth, I think the paper contains some interesting ideas (e.g. treating citations as a `tree' rather than a simple `list') but the authors make some assumptions that I find deeply questionable (e.g. that being cited in a short reference list is somehow of higher value than in a long one). The danger is that using such information in a metric could create an incentive for further bad behaviour (such as citation cartels).

I have blogged quite a few times about the uses and abuses of citations (see tag here), and I won’t rehearse these arguments here. I will say, however, that I do agree with the idea of sharing citations among the authors of a paper rather than giving each and every author credit for the total. Many astronomers disagree with this point of view, but surely it is perverse to argue that the 100th author of a paper with 51 citations deserves more credit than the sole author of a paper with 49?
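The thought experiment above is easy to put in numbers (function names my own): under whole counting every co-author receives the paper's full citation count, while under fractional counting the count is shared among co-authors.

```python
def whole_credit(citations: int, n_authors: int) -> float:
    # Whole counting: every author gets the full count.
    return float(citations)

def fractional_credit(citations: int, n_authors: int) -> float:
    # Fractional counting: the count is shared among co-authors.
    return citations / n_authors

# The 100th author of a 100-author paper with 51 citations, versus
# the sole author of a paper with 49 citations.
print(whole_credit(51, 100) > whole_credit(49, 1))            # True
print(fractional_credit(51, 100) > fractional_credit(49, 1))  # False
```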

Above all, though, the problem with constructing a metric for `Academic Reputation’ is that the concept is so difficult to define in the first place…