Experiments with crowdsourced re-annotation of a POS tagging data set

Citations: 41
Readers: 119

Abstract

Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have largely assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks. © 2014 Association for Computational Linguistics.
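
The abstract does not say how the multiple crowd annotations per item are combined before training; a standard baseline for this setting is per-token majority voting over the workers' tag sequences. Below is a minimal Python sketch of that idea, assuming each annotator supplies exactly one tag per token; the function name and the universal-style tag labels are illustrative, not taken from the paper.

from collections import Counter

def aggregate_pos_tags(annotations):
    # annotations: one tag sequence per annotator, all of equal length
    # (one tag per token of the same sentence).
    n_tokens = len(annotations[0])
    consensus = []
    for i in range(n_tokens):
        # Count each annotator's vote for token i.
        votes = Counter(seq[i] for seq in annotations)
        # most_common(1) breaks ties in favor of the tag seen first.
        consensus.append(votes.most_common(1)[0][0])
    return consensus

# Three hypothetical workers tag the sentence "the back room":
crowd = [
    ["DET", "NOUN", "NOUN"],
    ["DET", "ADJ",  "NOUN"],
    ["DET", "NOUN", "NOUN"],
]
print(aggregate_pos_tags(crowd))  # ['DET', 'NOUN', 'NOUN']

More refined aggregation schemes, such as models of annotator reliability that weight votes by estimated worker competence (e.g. MACE), can replace the uniform vote count here; the sketch above shows only the simplest baseline.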

Citation (APA)

Hovy, D., Plank, B., & Søgaard, A. (2014). Experiments with crowdsourced re-annotation of a POS tagging data set. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 2, pp. 377–382). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-2062
