Ranking versus rating in peer review of research grant applications

Abstract

The allocation of public funds for research has been based predominantly on peer review, in which reviewers rate an application on some form of ordinal scale from poor to excellent. Poor reliability and bias in peer review ratings have led funding agencies to experiment with different approaches to assessing applications. In this study, we compared the reliability and potential sources of bias associated with application rating against those of application ranking in 3,156 applications to the Canadian Institutes of Health Research. Ranking was more reliable than rating and less susceptible to the characteristics of the review panel, such as level of expertise and experience. However, both rating and ranking penalized early career investigators and favoured older applicants. Sex bias was evident only for rating, and only when the applicant’s H-index was at the lower end of the H-index distribution. We conclude that, compared with rating, ranking provides a more reliable assessment of the quality of research applications, is less influenced by reviewer expertise and experience, and is associated with fewer sources of bias. Research funding agencies should consider adopting ranking methods to improve the quality of funding decisions in health research.

Citation (APA)

Tamblyn, R., Girard, N., Hanley, J., Habib, B., Mota, A., Khan, K. M., & Ardern, C. L. (2023). Ranking versus rating in peer review of research grant applications. PLoS ONE, 18(10), e0292306. https://doi.org/10.1371/journal.pone.0292306
