A-Kappa: A measure of Agreement among Multiple Raters

  • Gautam S

Abstract

Simulation results yield guideline values for interpreting agreement among raters. For Kappa: values above 0.75 indicate almost perfect agreement, above 0.45 substantial agreement, above 0.2 moderate agreement, below 0.2 fair agreement, and below 0 poor agreement.
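The abstract quotes only these interpretation bands; the A-Kappa statistic itself is not defined on this page. As a rough illustration under that assumption, the Python sketch below computes the standard Fleiss' kappa for multiple raters (an assumed stand-in, not the paper's A-Kappa) and maps the result to the bands above. The function names fleiss_kappa and interpret_kappa and the toy rating matrix are hypothetical.

    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        """Fleiss' kappa for an N-subjects x k-categories matrix of rating counts.

        counts[i, j] = number of raters who assigned subject i to category j;
        every subject is assumed to be rated by the same number of raters.
        """
        counts = np.asarray(counts, dtype=float)
        n_subjects, _ = counts.shape
        n_raters = counts[0].sum()  # raters per subject (constant by assumption)

        # Per-subject observed agreement.
        p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar = p_i.mean()

        # Chance agreement from the marginal category proportions.
        p_j = counts.sum(axis=0) / (n_subjects * n_raters)
        p_e = np.square(p_j).sum()

        return (p_bar - p_e) / (1 - p_e)

    def interpret_kappa(kappa: float) -> str:
        """Map a kappa value to the interpretation bands quoted in the abstract."""
        if kappa > 0.75:
            return "almost perfect agreement"
        if kappa > 0.45:
            return "substantial agreement"
        if kappa > 0.2:
            return "moderate agreement"
        if kappa >= 0.0:
            return "fair agreement"
        return "poor agreement"

    # Toy example: 4 subjects, 3 raters, 2 categories.
    ratings = [[3, 0],
               [2, 1],
               [3, 0],
               [1, 2]]
    k = fleiss_kappa(np.array(ratings))
    print(f"kappa = {k:.3f} -> {interpret_kappa(k)}")

With this toy matrix the script prints kappa = 0.111 -> fair agreement, i.e. agreement only slightly better than chance under the quoted bands.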

Citation (APA)

Gautam, S. (2014). A-Kappa: A measure of Agreement among Multiple Raters. Journal of Data Science, 12(4), 697–716. https://doi.org/10.6339/jds.201410_12(4).0007
