
What a Journal Impact Factor Is and Isn’t

Should you judge the merits of a paper solely based on the impact factor of the journal that published it?

You may have heard of the impact factor. It is a number given to an academic journal (think Nature or The New England Journal of Medicine) which is often erroneously used as a proxy for how good the papers it publishes really are. If the journal has a high impact factor, it must mean the research you will find within it is solid, goes the sentiment; if the number is low, be skeptical.

The journal impact factor is calculated by an analytics company called Clarivate. It is meant to represent how often the papers the journal publishes get cited in other papers. Academic papers are full of references to previous research. Thus, if a particular paper gets cited a lot, it probably means that it is important. Journals that publish a lot of papers that get regularly cited must consequently be important journals.

The impact factor for a journal changes every year, and the way it is calculated is simple. For the year 2023, we take how often the papers it published in 2021 and 2022 were cited during the year 2023, and we divide it by how many papers it published in 2021 and 2022. If the result is 11, it means that the papers published in this journal in 2021 and 2022 were each cited 11 times on average in the year 2023. Hence, for the year 2023, this journal’s impact factor would be 11.
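The arithmetic above can be sketched in a few lines of code. The numbers here are hypothetical, chosen only to reproduce the example of an impact factor of 11:

```python
def impact_factor(citations_this_year: int, citable_papers: int) -> float:
    """Compute a journal impact factor for a given year.

    citations_this_year: citations received during the target year (e.g. 2023)
                         by papers the journal published in the two preceding
                         years (2021 and 2022)
    citable_papers:      number of citable papers the journal published in
                         those two preceding years
    """
    return citations_this_year / citable_papers

# Hypothetical journal: its 2021-2022 papers were cited 4,400 times in 2023,
# and it published 400 citable papers in 2021-2022.
print(impact_factor(4400, 400))  # 11.0
```

In other words, the impact factor is just an average: total citations in the target year divided by the number of citable papers from the two prior years.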

The reason for the delay (between 2021-2022 and 2023) is that research takes time and papers will not get massively cited the same year they are published. It is also worth mentioning that non-research publications like editorials are not counted here. Some journals are not listed in Clarivate’s database and do not have an impact factor. For those that are, the impact factor currently ranges from 0 to 254.7, with very few journals achieving a high score.

The impact factor is unfortunately misused quite regularly. It applies to a journal, but it is too often also applied to an individual paper published by the journal or to a scientist who has published in the journal. For example, it can be used to help decide if a researcher should be promoted or not within a university: if they published in high-impact journals, it must mean that they are a great scientist producing top-notch science. But great research may not get cited a lot if it belongs to a niche sub-specialty. Also, solid research should be reproducible, and a journal’s impact factor fails to account for this. And if we want to reward risky research and complex projects that take a decade to complete, the impact factor has nothing to say about them.

Placing too much importance on the impact factor can have a two-way corrosive effect, where the metric itself is gamed by the journals and its status among academics can skew the type of research they choose to pursue. A journal can be rewarded with a high impact factor when a handful of its papers get cited a lot, even when the majority of its publications get little to no citations. This incentivizes the publication of review articles, which get cited a lot, and the refusal to publish papers that the editor believes will be ignored. If you are a researcher, you may find yourself “teaching to the test,” meaning choosing your research projects based on what you think has a high chance of getting highly cited. Curiosity hence takes a backseat to the quest for recognition within a system that is peppered with bad incentives.

When it comes to assessing the worth of a paper, simply inferring that it must be good because it was published in a journal with a high impact factor is a bad idea. To take an extreme example, Andrew Wakefield’s fraudulent 1998 paper linking the measles-mumps-rubella vaccine to autism was published in The Lancet, a prestigious medical journal with a 2022 impact factor of 168.9. (I couldn’t find the impact factor it had in 1998, but it’s fair to say that it was high and that The Lancet was very well regarded at the time, as it is today.) The flip side of this is that good, reproducible research does get published in journals with small impact factors. And very average research can get published in good journals, whose impact factors skyrocket because of a few highly-cited papers. An impact factor is simply not a grade bestowed upon an individual paper.

As part of a comprehensive assessment of the worth of a paper, the journal’s impact factor can play a role. Pseudoscience, for example, rarely gets published in journals with very high impact factors. Sometimes, though, pseudoscientific views, like those of anti-vaccine activists, will make their way into respectable journals as “letters to the editor,” but they are not original research, merely opinion pieces.

Ultimately, how good a paper is can be difficult to evaluate by someone who is not an expert in the field, but a few rules of thumb can help us eliminate many of the worst offenders in the health sciences: the lack of a control group, a small number of participants, and research exclusively done on animals should always make us skeptical that the findings are real and will apply in humans. The fact that a paper was published in Nature may be reassuring, but it’s not a guarantee. Nature, despite an impact factor of 64.8, has published papers it later retracted.

Don’t let the impact factor make too much of an impact on your judgment.


@CrackedScience
