If not impact factor, then what?
In the last blog post, I talked about impact factor (IF) and what it might or might not tell you about a journal’s reputation and quality. If you read that post, you have a basic understanding of the shortcomings of using IF as your sole source of information when deciding how well respected the journal is. So in this post, I want to discuss some ways you can evaluate the quality of a journal to which you’re considering entrusting your work.
As mentioned in the previous post about IF, this metric has four major limitations: It allows citations of things like letters to the editor to inflate the factor; it does not take into account the number of professionals working in the particular discipline served by the journal (so journals in specialized areas are likely to have lower IFs); there is a low correlation between IF and the number of citations a specific paper receives; and a single paper that garners a high number of citations can inflate the journal’s IF.
Alberts (2013) correctly noted that IF is not only a deeply flawed measure of journal quality, but that these flaws also ripple into the careers of scientists when the metric is used to judge the quality of their work. And the metric is more influential in academic success than you might think.
van Dijk et al. (2014) developed a model to predict the likelihood of researchers’ becoming principal investigators (PIs). Analyzing data from over 25,000 individual researchers, the authors found that the top three predictors of attainment of PI status are: the number of publications the researcher has, the IF of the journals that published the work, and the number of papers that received more citations than average for the journal. (Interestingly, first-authored papers were much more important than second- or later-authored ones, which suggests that the decision about where to submit your paper matters most when you are the first author.)
According to Alberts, at least 75 scientific organizations, including the American Association for the Advancement of Science, have joined an effort to discontinue use of IF as a measure of individual scientists’ work. Hicks and colleagues also published their so-called Leiden Manifesto (Hicks et al., 2015), which outlined 10 specific measures to achieve fair, objective, and transparent evaluations of scientists’ professional work.
IF is flawed, but is there anything better?
Hopefully, if you didn’t realize the inadequacy of IF as a sole metric of journal quality or of your worth as a scientist, you do now. But aren’t all journal metrics flawed in some way? If we trade IF for some other method of evaluating journal quality, aren’t we just trading one set of problems for another?
Yes, and no. The fact is that using any single formula to determine quality will always produce less than optimal results. However, if multiple methods (flawed though they might be) converge on a similar conclusion, you can assume with reasonable confidence that the information is reliable. So you need not necessarily toss out IF as an indicator of quality. Instead, ask yourself a few simple questions to see how the answers stack up with the IF for a given journal:
- Does it publish articles that you consider high in quality? As comedian Groucho Marx famously said, “I don’t care to belong to any club that will have me as a member.” If you doubt your own ability to judge the value of others’ work, you will be in a poor position to judge the value of your own. (If this is the case, you’ll need to seek help from a more experienced colleague for evaluating the quality of journals you’re considering.) Conversely, if you’re certain that your work is of high quality, you should also be able to make this judgment about the work of others in your field. You wouldn’t want your hard work to be represented in a journal that also publishes terrible papers. If you’re considering a journal that you’re not familiar with, browse a few recent issues first. Do the research designs appear to be sound? Do the papers cite relevant, highly respected sources? Are the papers written in proper English? Would you feel proud or embarrassed to see your own work here?
- Is there a real peer review process? Check the journal’s guidelines for authors. These will usually describe the submission process, including peer review (if it exists). It’s a good idea to check the journal’s acceptance rate as well. Some journals claim to have a peer review process, but if the acceptance rate is very high (e.g., more than 60% or 70%), the process may be more rubber-stamping than substantive review. Many journals publish their acceptance rates on their websites; for others, you may access this information in external sources such as Cabell’s Directory of Publishing Opportunities. Failing these two strategies, you can always contact the journal’s editorial office directly and ask for the information.
- Is it cited by others in my topic area? You can use Google Scholar to search for the journal name and a keyword in your topic area. Use the search tools to specify a fairly recent range (e.g., the past 5 years). Google Scholar tells you how many times each article you find has been cited. This gives you a basic idea not only of how well cited the journal is, but also of whether papers in your topic area are attracting those citations.
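The convergence idea behind these questions can be sketched as a simple heuristic: treat each check as an independent signal and look for agreement among them. The thresholds below (an IF floor, an acceptance-rate cutoff, a citation count) are illustrative assumptions for the sketch, not established standards:

```python
# Hypothetical convergence check for journal-quality signals.
# All thresholds here are illustrative assumptions, not established cutoffs.

def journal_signals(impact_factor, peer_reviewed, acceptance_rate, topic_citations):
    """Return a list of independent quality signals (True = favorable)."""
    return [
        impact_factor >= 2.0,     # assumed IF floor
        peer_reviewed,            # a substantive peer review process exists
        acceptance_rate <= 0.40,  # assumed selectivity cutoff
        topic_citations >= 50,    # citations to recent papers in your topic area
    ]

def signals_converge(signals, minimum=3):
    """Signals 'converge' when most of them agree the journal is reputable."""
    return sum(signals) >= minimum

# Values loosely based on the first scenario discussed below
example = journal_signals(5.5, True, 0.31, 100)
print(signals_converge(example))  # True: all four signals agree
```

The point of the sketch is not the specific numbers but the structure: no single metric decides the question; a journal looks trustworthy only when several imperfect indicators point the same way.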
Putting It All Together
Let’s look at an example that incorporates IF and these three questions. Say you’ve written a paper about early onset dementia, and you’re considering publishing it in the Journal of Clinical Psychiatry. A Google search tells you that the most recent impact factor available is 5.5 (in 2014); the journal is peer-reviewed; and the acceptance rate is 31%. You go to Google Scholar and enter “J Clinical Psychiatry” and “early onset dementia,” and find several abstracts from the last 5 years. You don’t see any glaring problems with the research designs or the quality of the language. You might see papers published by some of the big names in the field. You look at the number of citations for each of the papers and note that several papers have been cited 100 times or more. (That’s a lot of citations.)
Now let’s consider another example with a different journal, Alzheimer Disease & Associated Disorders. Its IF for 2014 is 2.4; the journal is listed as peer-reviewed; and its acceptance rate is 50%. You find far fewer articles in your topic area, and the ones you find have far fewer citations than the other journal did.
In the first example, all the methods of evaluation converge on the conclusion that this is a reasonably reputable journal that publishes papers in your topic area. In the second example, all the methods converge to suggest that this journal is not as good a fit for your paper.
Should you automatically choose the first journal? In large part, the decision comes down to your own confidence in the soundness and importance of your research. Clearly, the first journal seems more desirable, but your chances of being accepted are also lower. The second journal is easier to publish in, but may not be read by as many people or reflect as well on the quality of your work. (Remember that you will likely be citing your own work in years to come, and you’d like the citations to be of the highest possible quality.) If you have time, the best course of action might be to submit to the first journal and await the outcome. If your manuscript is rejected without review, or if the required revisions seem impossible or unreasonable, you can always try the second (or a third or fourth) journal where your chances are better. After all, wouldn’t you want to belong to the best club that would have you as a member?
If you have a question or comment on this post, please share it in the comment area. You can also pose a question or comment by using our contact form.
Alberts, B. (2013). Impact factor distortions. Science, 340(6134), 787.
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). The Leiden Manifesto for research metrics. Nature, 520(7548), 429.
van Dijk, D., Manor, O., & Carey, L. B. (2014). Publication metrics and success on the academic job market. Current Biology, 24(11), R516-R517.