Testing the reliability of inter-rater reliability

19 citations · 48 Mendeley readers

Abstract

Analyses of learning often rely on coded data, and one important aspect of coding is establishing reliability. Previous research has shown that the common approach for establishing coding reliability is seriously flawed, producing unacceptably high Type I error rates. This paper tests whether these error rates are tied to specific reliability metrics or reflect a larger methodological problem. Our results show that the problem is not metric-specific, and we suggest the adoption of new practices to control the Type I error rates associated with establishing coding reliability.
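To ground the discussion of reliability metrics, here is a minimal sketch of one widely used inter-rater reliability statistic, Cohen's kappa. The paper does not single out this metric or provide this code; the function and the example labels below are purely illustrative assumptions, showing the general shape of a chance-corrected agreement statistic.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters coded independently at their base rates.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary codes from two raters on eight items.
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # prints 0.467
```

Kappa is 1.0 under perfect agreement and near 0 when agreement is no better than chance; the Type I error problem the paper examines concerns how thresholds on such statistics are applied to a sampled subset of the data, not the statistics themselves.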

Citation (APA)

Eagan, B., Brohinsky, J., Wang, J., & Shaffer, D. W. (2020). Testing the reliability of inter-rater reliability. In ACM International Conference Proceeding Series (pp. 454–461). Association for Computing Machinery. https://doi.org/10.1145/3375462.3375508
