Simulation results produced values to guide the interpretation of agreement among raters. For Kappa: > .75 indicates almost perfect agreement, > .45 substantial agreement, > .2 moderate agreement, < .2 fair agreement, and < 0 poor agreement.
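A minimal sketch of how these guideline thresholds could be applied in practice. The function name and the handling of boundary values are illustrative assumptions, not taken from the paper:

```python
def interpret_kappa(kappa: float) -> str:
    """Map a kappa value to the agreement category suggested by the
    guidelines quoted in the abstract.

    Thresholds: > .75 almost perfect, > .45 substantial, > .2 moderate,
    < .2 fair, < 0 poor. Exact boundary handling is an assumption.
    """
    if kappa > 0.75:
        return "almost perfect agreement"
    if kappa > 0.45:
        return "substantial agreement"
    if kappa > 0.20:
        return "moderate agreement"
    if kappa >= 0.0:
        return "fair agreement"
    return "poor agreement"


if __name__ == "__main__":
    for k in (0.82, 0.50, 0.30, 0.10, -0.05):
        print(f"kappa = {k:+.2f}: {interpret_kappa(k)}")
```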
CITATION STYLE
Gautam, S. (2014). A-Kappa: A Measure of Agreement Among Multiple Raters. Journal of Data Science, 12(4), 697–716. https://doi.org/10.6339/jds.201410_12(4).0007