Measuring nominal scale agreement among many raters

Abstract

The kappa statistic was introduced to measure nominal scale agreement between a fixed pair of raters. This article generalizes kappa to the case where each of a sample of 30 patients was rated on a nominal scale by the same number of psychiatrist raters (n = 6), but where the raters rating one patient were not necessarily the same as those rating another. Large-sample standard errors are derived. © 1971 American Psychological Association.
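The generalization described in the abstract is commonly known as Fleiss' kappa. A minimal sketch of the standard formulation follows, assuming a subjects-by-categories count table with a fixed number of raters per subject; the function name and variable names are ours, not from the article:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a subjects-by-categories count table.

    table[i][j] = number of raters who assigned subject i to category j.
    Assumes every subject is rated by the same number of raters m.
    """
    n = len(table)       # number of subjects
    m = sum(table[0])    # raters per subject
    k = len(table[0])    # number of categories

    # Mean per-subject agreement: P_i = (sum_j n_ij^2 - m) / (m (m - 1))
    P_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in table
    ) / n

    # Marginal category proportions p_j and chance agreement P_e = sum_j p_j^2
    p = [sum(row[j] for row in table) / (n * m) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    # Kappa: observed agreement corrected for agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

For example, perfect agreement (`[[3, 0], [0, 3]]`: three raters unanimous on each of two subjects) yields kappa = 1, while maximal disagreement given balanced marginals pushes kappa below zero.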

Citation (APA)

Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382. https://doi.org/10.1037/h0031619
