Reliability of journal impact factor rankings

Abstract

Background: Journal impact factors and their ranks are used widely by journals, researchers, and research assessment exercises.

Methods: Based on citations to journals in research and experimental medicine in 2005, Bayesian Markov chain Monte Carlo methods were used to estimate the uncertainty associated with these journal performance indicators.

Results: Intervals representing plausible ranges of values for journal impact factor ranks indicated that most journals cannot be ranked with great precision. Only the top and bottom few journals could be placed in their rank positions with any confidence; for most journals the intervals were wide and overlapping.

Conclusion: Decisions based on journal impact factors are potentially misleading where the uncertainty associated with the measure is ignored. This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks, and specifically that a measure of uncertainty should be routinely presented alongside the point estimate. © 2007 Greenwood; licensee BioMed Central Ltd.
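The general idea can be illustrated with a small simulation. The sketch below is not the model fitted in the paper: the journal names, citation and article counts, and the conjugate Gamma-Poisson approximation (used here in place of the full Bayesian MCMC model) are assumptions chosen purely for illustration. It shows how resampling plausible citation rates turns uncertainty in impact factors into interval estimates for ranks, which typically overlap for mid-table journals.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example data: citations received in 2005 by articles published
# in the two preceding years, and the number of citable items, per journal.
journals = ["Journal A", "Journal B", "Journal C", "Journal D", "Journal E"]
citations = np.array([1200, 1150, 600, 580, 90])   # citations received
articles = np.array([400, 380, 250, 240, 60])      # citable items published

n_draws = 10_000

# Approximate the uncertainty in each journal's underlying citation rate with a
# conjugate Gamma-Poisson model (posterior roughly Gamma(citations + 1, 1)),
# then convert each simulated rate into an impact factor and a rank.
rates = rng.gamma(shape=citations + 1, scale=1.0, size=(n_draws, len(journals)))
impact_factors = rates / articles

# Rank 1 = highest impact factor within each simulated draw.
ranks = (-impact_factors).argsort(axis=1).argsort(axis=1) + 1

for j, name in enumerate(journals):
    lo, hi = np.percentile(ranks[:, j], [2.5, 97.5])
    point = citations[j] / articles[j]
    print(f"{name}: IF = {point:.2f}, 95% rank interval = [{int(lo)}, {int(hi)}]")

With these illustrative numbers, the closely matched journals at the top receive overlapping rank intervals, while only clearly separated journals can be ranked with confidence, mirroring the qualitative conclusion of the abstract.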

Citation (APA)

Greenwood, D. C. (2007). Reliability of journal impact factor rankings. BMC Medical Research Methodology, 7. https://doi.org/10.1186/1471-2288-7-48
