Measuring Annotator Agreement Generally across Complex Structured, Multi-object, and Free-text Annotation Tasks

Abstract

When annotators label data, a key metric for quality assurance is inter-annotator agreement (IAA): the extent to which annotators agree on their labels. Though many IAA measures exist for simple categorical and ordinal labeling tasks, relatively little work has considered more complex labeling tasks, such as structured, multi-object, and free-text annotations. Krippendorff's α, best known for use with simpler labeling tasks, does have a distance-based formulation with broader applicability, but little work has studied its efficacy and consistency across complex annotation tasks. We investigate the design and evaluation of IAA measures for complex annotation tasks, with evaluation spanning seven diverse tasks: image bounding boxes, image keypoints, text sequence tagging, ranked lists, free text translations, numeric vectors, and syntax trees. We identify the difficulty of interpretability and the complexity of choosing a distance function as key obstacles in applying Krippendorff's α generally across these tasks. We propose two novel, more interpretable measures, showing they yield more consistent IAA measures across tasks and annotation distance functions.
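For context, the distance-based formulation of Krippendorff's α mentioned in the abstract is α = 1 − D_o / D_e, where D_o is the average distance between annotations of the same item (observed disagreement) and D_e is the average distance between annotations pooled across items (expected disagreement). The sketch below is an illustrative simplification, not the paper's implementation: it omits Krippendorff's finite-sample weighting, the function and variable names are hypothetical, and any task-appropriate distance function can be plugged in.

```python
from itertools import combinations
from typing import Callable, Dict, Hashable, List, TypeVar

A = TypeVar("A")  # annotation type: label, bounding box, string, tree, ...

def krippendorff_alpha(
    annotations: Dict[Hashable, List[A]],
    distance: Callable[[A, A], float],
) -> float:
    """Simplified distance-based Krippendorff's alpha: 1 - D_o / D_e."""
    # Observed disagreement D_o: mean pairwise distance between annotations
    # of the same item.
    within_pairs = [
        distance(a, b)
        for labels in annotations.values()
        for a, b in combinations(labels, 2)
    ]
    # Expected disagreement D_e: mean pairwise distance over all annotations,
    # ignoring which item each one belongs to.
    pooled = [a for labels in annotations.values() for a in labels]
    across_pairs = [distance(a, b) for a, b in combinations(pooled, 2)]

    d_o = sum(within_pairs) / len(within_pairs)
    d_e = sum(across_pairs) / len(across_pairs)
    return 1.0 - d_o / d_e

# Toy example with numeric labels and a squared-difference distance.
labels = {"item1": [1.0, 1.0, 2.0], "item2": [3.0, 3.5], "item3": [0.0, 0.5]}
print(krippendorff_alpha(labels, lambda x, y: (x - y) ** 2))
```

With a categorical distance (0 if equal, 1 otherwise) this reduces to the nominal-data case; swapping in a distance such as 1 − IoU for bounding boxes or an edit distance for text is how the same formulation can be applied to the more complex tasks listed above.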

Citation (APA)

Braylan, A., Alonso, O., & Lease, M. (2022). Measuring Annotator Agreement Generally across Complex Structured, Multi-object, and Free-text Annotation Tasks. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 1720–1730). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3512242
