Abstract
This paper presents a case study of word sense annotation, a difficult and important categorical labeling task, to demonstrate a probabilistic annotation model applied to crowdsourced data. It is argued that standard chance-adjusted agreement levels are neither necessary nor sufficient to ensure high-quality gold-standard labels. Compared with conventional agreement measures, applying an annotation model to instances with crowdsourced labels yields higher-quality labels at lower cost.
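The "chance-adjusted agreement" the abstract refers to is a statistic such as Cohen's kappa, which discounts the agreement two annotators would reach by labeling at random. A minimal sketch of the computation, on made-up word-sense labels (the sense tags `bank.1`/`bank.2` are illustrative, not from the paper's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-adjusted agreement between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical sense labels for six instances from two annotators.
a = ["bank.1", "bank.1", "bank.2", "bank.1", "bank.2", "bank.2"]
b = ["bank.1", "bank.2", "bank.2", "bank.1", "bank.2", "bank.1"]
print(round(cohens_kappa(a, b), 3))  # observed 4/6, chance 0.5 -> kappa 0.333
```

Here the annotators agree on 4 of 6 items (0.667 raw agreement), yet kappa is only 0.333 once chance agreement is removed, illustrating how a raw agreement figure can overstate label quality.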
Passonneau, R. J., & Carpenter, B. (2020). The benefits of a model of annotation. In LAW 2013 and ID 2013 - 7th Linguistic Annotation Workshop and Interoperability with Discourse, Proceedings of the Workshop (pp. 187–195). Association for Computational Linguistics (ACL). https://doi.org/10.1162/tacl_a_00185