The JBJS Peer-Review Scoring Scale: A valid, reliable instrument for measuring the quality of peer review reports


Abstract

Many journals seek to evaluate the quality of reviews performed by their panel of reviewers. The purpose of this study was to determine whether members of a journal editorial board can consistently and reliably use a single numeric scoring system to evaluate the quality of peer reviews. A retrospective analysis of 11 randomly selected manuscripts that had undergone external peer review by three reviewers was performed. Six had been rejected and five accepted. Each deputy editor was asked to score each of the reviews. The intraclass correlation coefficient was computed for each manuscript to assess the consistency of grading. For 10 of the 11 manuscripts, the intraclass correlation was above 0.87. This study demonstrates that an editorial board of deputy editors, without external training, can consistently and reliably grade reviews with excellent agreement.
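The agreement statistic used in the study, the intraclass correlation coefficient (ICC), can be computed from a two-way ANOVA decomposition of the manuscript-by-rater score matrix. The sketch below implements one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater); the abstract does not specify which ICC variant the authors used, and the example scores are illustrative, not the study's data.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-review means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical scores: 6 reviews, each graded by 3 deputy editors
scores = np.array([
    [9, 9, 8],
    [7, 8, 7],
    [3, 2, 3],
    [5, 5, 6],
    [8, 8, 9],
    [2, 3, 2],
], dtype=float)

print(round(icc2_1(scores), 3))  # raters agree closely, so ICC is near 1
```

Values above roughly 0.75 are conventionally read as excellent agreement, which is the threshold the study's per-manuscript ICCs (above 0.87 for 10 of 11 manuscripts) comfortably exceed.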

Citation (APA)

Thompson, S. R., Agel, J., & Losina, E. (2016). The JBJS Peer-Review Scoring Scale: A valid, reliable instrument for measuring the quality of peer review reports. Learned Publishing, 29(1), 23–25. https://doi.org/10.1002/leap.1009
