Measures of interrater agreement


Abstract

The kappa statistic is used to assess agreement between two or more raters when the measurement scale is categorical. In this short summary, we discuss and interpret the key features of the kappa statistic, the impact of prevalence on the kappa statistic, and its utility in clinical research. We also introduce the weighted kappa for ordinal outcomes and the intraclass correlation coefficient for assessing agreement when the data are measured on a continuous scale. Copyright © 2010 by the International Association for the Study of Lung Cancer.
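For orientation (this detail is not spelled out in the abstract itself): for two raters, Cohen's kappa is defined as kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance alone. The sketch below shows one way to compute the unweighted and weighted kappa in Python; the scikit-learn call and the example ratings are illustrative assumptions, not methods prescribed by the article.

    # Illustrative sketch, not from the article: Cohen's kappa and the
    # weighted kappa for ordinal data, computed with scikit-learn.
    # The ratings below are made-up example data.
    from sklearn.metrics import cohen_kappa_score

    rater1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]  # ordinal ratings by rater 1
    rater2 = [0, 1, 2, 1, 3, 1, 1, 2, 3, 2]  # ordinal ratings by rater 2

    # Unweighted kappa treats every disagreement as equally severe.
    print(cohen_kappa_score(rater1, rater2))

    # Quadratically weighted kappa penalizes large ordinal disagreements
    # more heavily than near-misses, in the spirit of the weighted kappa
    # the abstract introduces for ordinal outcomes.
    print(cohen_kappa_score(rater1, rater2, weights="quadratic"))

For continuous measurements the abstract points to the intraclass correlation coefficient instead; that is a different computation (typically based on a variance-components or mixed-effects model) and is not shown here.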

Citation (APA)

Mandrekar, J. N. (2011). Measures of interrater agreement. Journal of Thoracic Oncology, 6(1), 6–7. https://doi.org/10.1097/JTO.0b013e318200f983
