Supersense tagging with inter-annotator disagreement


Abstract

Linguistic annotation underlies many successful approaches in Natural Language Processing (NLP), where annotated corpora are used for training and evaluating supervised learners. The consistency of annotation limits the performance of supervised models, and thus considerable effort goes into obtaining high-agreement annotated datasets. Recent research has shown that annotation disagreement is not random noise, but carries a systematic signal that can be used to improve the supervised learner. However, prior work was limited in scope, focusing only on part-of-speech tagging in a single language. In this paper we broaden the experiments to a semantic task (supersense tagging) using multiple languages. In particular, we analyse how systematic disagreement is in sense annotation, and we present a preliminary study of whether patterns of disagreement transfer across languages.
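To illustrate the kind of analysis the abstract describes, the sketch below computes chance-corrected agreement (Cohen's kappa) and the matrix of disagreement patterns for two annotators' tag sequences. The supersense labels and the toy data are hypothetical examples, not taken from the paper's corpora:

```python
from collections import Counter

def cohens_kappa(tags_a, tags_b):
    """Chance-corrected agreement between two annotators on parallel tag sequences."""
    assert len(tags_a) == len(tags_b)
    n = len(tags_a)
    # Observed agreement: fraction of tokens where the annotators chose the same tag.
    observed = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    # Expected agreement: chance of matching given each annotator's tag distribution.
    freq_a, freq_b = Counter(tags_a), Counter(tags_b)
    expected = sum(freq_a[t] * freq_b.get(t, 0) for t in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def disagreement_matrix(tags_a, tags_b):
    """Count how often annotator A's tag x pairs with annotator B's tag y (x != y).
    Systematic disagreement shows up as a few frequent (x, y) pairs rather than
    a uniform spread over all tag pairs."""
    return Counter((a, b) for a, b in zip(tags_a, tags_b) if a != b)

# Toy annotations of six tokens by two annotators (hypothetical supersense labels).
a = ["n.person", "n.act", "v.motion", "n.artifact", "v.motion", "n.act"]
b = ["n.person", "n.act", "v.motion", "n.act",      "v.change", "n.act"]

kappa = cohens_kappa(a, b)          # 0.556: moderate agreement
pairs = disagreement_matrix(a, b)   # which label pairs the annotators confuse
```

A skewed `disagreement_matrix` is the kind of systematic signal the paper exploits: if two supersenses are routinely confused, that pattern can inform the learner rather than being discarded as noise.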

Citation (APA)

Alonso, H. M., Johannsen, A., & Plank, B. (2016). Supersense tagging with inter-annotator disagreement. In LAW 2016 - 10th Linguistic Annotation Workshop, held in conjunction with ACL 2016 - Workshop Proceedings (pp. 43–48). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-1706
