Inter-coder agreement for computational linguistics

Abstract

This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappa-like measures in computational linguistics, may be more appropriate for many corpus annotation tasks, but that their use makes the interpretation of the value of the coefficient even harder. © 2008 Association for Computational Linguistics.
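The coefficients surveyed in the article share the chance-corrected form (A_o - A_e) / (1 - A_e), differing only in how expected agreement A_e is estimated. The sketch below is not the authors' code; it is a minimal illustration of that shared formula for two coders, with Cohen's kappa using per-coder label distributions and Scott's pi using a single pooled distribution. The toy labels and data are hypothetical.

```python
from collections import Counter

def observed_agreement(coder1, coder2):
    """Proportion of items on which the two coders assign the same label (A_o)."""
    assert len(coder1) == len(coder2)
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: expected agreement uses each coder's own label distribution."""
    n = len(coder1)
    p1, p2 = Counter(coder1), Counter(coder2)
    labels = set(coder1) | set(coder2)
    a_e = sum((p1[k] / n) * (p2[k] / n) for k in labels)
    a_o = observed_agreement(coder1, coder2)
    return (a_o - a_e) / (1 - a_e)

def scotts_pi(coder1, coder2):
    """Scott's pi: expected agreement uses one distribution pooled over both coders."""
    n = len(coder1)
    pooled = Counter(coder1) + Counter(coder2)
    a_e = sum((c / (2 * n)) ** 2 for c in pooled.values())
    a_o = observed_agreement(coder1, coder2)
    return (a_o - a_e) / (1 - a_e)

# Hypothetical toy annotation: two coders labeling ten utterances with
# made-up dialogue-act tags ("stat", "ireq", "chck").
c1 = ["stat", "stat", "ireq", "chck", "stat", "ireq", "ireq", "stat", "chck", "stat"]
c2 = ["stat", "ireq", "ireq", "chck", "stat", "ireq", "chck", "stat", "chck", "stat"]

print(f"observed agreement: {observed_agreement(c1, c2):.3f}")
print(f"Cohen's kappa:      {cohens_kappa(c1, c2):.3f}")
print(f"Scott's pi:         {scotts_pi(c1, c2):.3f}")
```

Because the two toy coders have slightly different label marginals, kappa comes out marginally higher than pi here, which is the general relationship between the two coefficients.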

Citation (APA)

Artstein, R., & Poesio, M. (2008). Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4), 555–596. https://doi.org/10.1162/coli.07-034-R2
