Understanding journal impact factors
If you have ever had the task of finding the right journal for your paper, you may have considered the impact factors of several journals. Despite its many flaws, impact factor is still the most common metric for evaluating the prestige of scientific journals. In this post, we’ll take a brief look at impact factor and how it can help or harm your scientific career.
What is the impact factor?
The impact factor is the average number of times that an article in a journal has been cited in a particular year. Impact factors are usually reported as 2-year or 5-year values: the average number of times that articles published in the journal within the last 2 or 5 years were cited in the current year. As an example, consider the hypothetical Journal of Dream Studies (JDS). At the end of 2016, it has a 2-year impact factor of 5.0. This means that articles published by JDS in 2014 and 2015 were cited in 2016 an average of 5 times. (The citations can come from the same journal or from different journals.)
How is impact factor calculated, and who does it?
The organization responsible for inventing and calculating impact factors is Thomson Reuters, and the publication in which they present the impact factors of various journals is called Journal Citation Reports. Here’s an example of how they would derive the 2-year impact factor for JDS:
Let’s say that in 2016, JDS articles from 2014 and 2015 were cited a total of 10,000 times, and that the journal published 2,000 articles that were deemed “citable” during those two years. The 2-year impact factor is simply the ratio of the two: 10,000 citations ÷ 2,000 citable articles = 5.0.
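The arithmetic above can be sketched in a few lines of Python. (The journal and all figures here are the hypothetical ones from the example, not real data.)

```python
# 2-year impact factor for the hypothetical Journal of Dream Studies (JDS)

citations_in_2016 = 10_000   # 2016 citations of any JDS article from 2014 or 2015
citable_articles = 2_000     # "citable" articles JDS published in 2014 and 2015

impact_factor = citations_in_2016 / citable_articles
print(impact_factor)  # 5.0
```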
Although impact factor seems, on the surface, to be a good idea, you may already have spotted a problem with the formula. It has to do with what counts as a “citable” article. Such articles do not include items like letters to the editor, commentaries, or book reviews. But because “citable” articles appear only in the denominator of the formula, while citations of these “non-citable” items still count in the numerator, any actual citations of non-citable items will automatically inflate the impact factor.
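To see how this numerator/denominator mismatch plays out, suppose (hypothetically) that 500 of the 10,000 citations in our JDS example actually point to letters and commentaries, which are excluded from the count of citable articles:

```python
citations_to_citable = 9_500     # citations of ordinary research articles
citations_to_noncitable = 500    # citations of letters, commentaries, book reviews
citable_articles = 2_000         # only these appear in the denominator

# The reported impact factor counts ALL citations in the numerator...
reported = (citations_to_citable + citations_to_noncitable) / citable_articles

# ...but a like-for-like ratio would count only citations to citable articles.
like_for_like = citations_to_citable / citable_articles

print(reported)       # 5.0
print(like_for_like)  # 4.75
```

The gap between the two numbers is exactly the inflation contributed by citations of non-citable items.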
What are the weaknesses of impact factor as a measure of a journal’s quality?
There are several other problems with impact factor. One is confounding variables such as the number of professionals working in a particular field: fewer professionals generally mean lower impact factors, simply because there aren’t as many authors writing and citing one another (Bornmann & Daniel, 2008). Another is the demonstrated low correlation between a journal’s impact factor and the number of citations a specific article in it receives (Seglen, 1997). In other words, a journal’s high impact factor is no guarantee that the articles within it are of high quality (Bornmann et al., 2012). A third major problem is that because impact factor is an average, a few highly cited papers can skew a journal’s value considerably. Dimitrov and colleagues demonstrate this convincingly in their 2010 commentary in Nature, showing that one journal’s impact factor rose from less than 3 to almost 50 in a single year, largely because a single paper was cited over 5,600 times. There are other shortcomings associated with impact factors, but they remain the most commonly used metric in the scientific community, so you should know how to use them when evaluating journals for your publications.
How should I use impact factor when considering where to publish my work?
Before you think about impact factors, try to narrow your list of possible journals to just a few. Check this blog post for suggestions on how to do that. Once you have a short list of possible journals, you can go directly to the source of impact factors, Journal Citation Reports (https://jcr.incites.thomsonreuters.com), where you can find this information for any journal that Thomson Reuters indexes. However, use of the service requires a login, and I haven’t found the site very user-friendly. What I typically do is simply enter the journal name and “impact factor” in my search engine. This usually takes me either to the page on the journal’s website that shows the impact factor, or to some other source of the information. If the value seems very high or low in comparison to the other journals you are considering, try getting the impact factors for a few consecutive years and comparing them to see whether they are similar.
What is a high or low impact factor?
In general, journals with an impact factor less than 1 or 2 are considered undesirable. This doesn’t necessarily mean that they aren’t good journals; it may simply mean that they are very new or very specialized. But if your goal is to have your paper read and its findings used by as many researchers as possible, it’s a good idea to keep the impact factor in mind. (You should also know that some universities use impact factors as a way to evaluate the quality of the journals in which you publish, and decisions about important issues such as tenure and promotion may be affected by the results of these evaluations.)
Some authors choose open access journals because they want to be sure that other researchers can retrieve and read their article easily, and if the subscription barrier is removed, this may help the article reach more people. However, you should be aware that the impact factors of open access journals are typically much lower than those of subscription-based journals, and although this seems to be changing, the change is very slow.
Do you have questions about impact factors or about journal selection? If so, please leave a comment or use the “Contact Editoracle” button to reach us directly.
Coming up in the next post: Some alternatives to impact factor as a way to judge the quality of a journal.
Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45–80.
Bornmann, L., Marx, W., Gasparyan, A.Y., & Kitas, G.D. (2012). Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatology International, 32(7), 1861–1867.
Dimitrov, J.D., Kaveri, S.V., & Bayry, J. (2010). Metrics: journal’s impact factor skewed by a single paper. Nature, 466, 179.
Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314, 498–502.