Implementing a general framework for assessing interrater agreement in Stata

Abstract

Despite its well-known weaknesses, researchers continue to choose the kappa coefficient (Cohen, 1960, Educational and Psychological Measurement 20: 37–46; Fleiss, 1971, Psychological Bulletin 76: 378–382) to quantify agreement among raters. Part of kappa's persistent popularity seems to arise from a lack of alternative agreement coefficients in statistical software packages such as Stata. In this article, I review Gwet's (2014, Handbook of Inter-Rater Reliability) recently developed framework of interrater agreement coefficients. This framework extends several agreement coefficients to handle any number of raters, any number of rating categories, any level of measurement, and missing values. I introduce the kappaetc command, which implements this framework in Stata.
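
As a rough illustration of how the command described in the abstract might be used, the sketch below installs kappaetc from the SSC archive and computes agreement coefficients for ratings stored one variable per rater. The variable names rater1–rater3 are hypothetical, and the wgt() option shown for weighted (ordinal) agreement is an assumption about the command's syntax rather than an example taken from the article.

    * install kappaetc from the SSC archive (one-time setup)
    ssc install kappaetc

    * each variable holds one rater's ratings of the same subjects
    * (rater1-rater3 are hypothetical variable names)
    kappaetc rater1 rater2 rater3

    * weighted agreement for ordinal ratings (assumed option syntax)
    kappaetc rater1 rater2 rater3, wgt(quadratic)

If the command behaves as the abstract suggests, the output includes several chance-corrected agreement coefficients computed under Gwet's framework alongside the observed percentage of agreement.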

Citation (APA)

Klein, D. (2018). Implementing a general framework for assessing interrater agreement in Stata. Stata Journal, 18(4), 871–901. https://doi.org/10.1177/1536867x1801800408
