Crowd-in-the-loop: A hybrid approach for annotating semantic roles

Citations: 11 · Mendeley readers: 94

Abstract

Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts to use crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) have reported only modest results, suggesting that SRL may be too difficult a task to crowdsource effectively. In this paper, we postulate that while producing SRL annotation does in general require expert involvement, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which a classifier identifies difficult annotation tasks and routes each task either to experts or to crowd workers according to its difficulty. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation with little loss in quality.
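The routing workflow described in the abstract can be pictured as a simple dispatcher: a difficulty classifier scores each SRL labeling task, and tasks above a difficulty threshold go to experts while the rest go to the crowd. The sketch below is illustrative only; the difficulty_score function and the threshold value are placeholders, since the abstract does not specify the paper's actual classifier or features.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SRLTask:
    sentence: str            # the sentence containing the predicate
    predicate: str           # the verb whose arguments need role labels
    candidate_argument: str  # the span to be labeled with a semantic role

def route_tasks(
    tasks: List[SRLTask],
    difficulty_score: Callable[[SRLTask], float],
    threshold: float = 0.5,
) -> Tuple[List[SRLTask], List[SRLTask]]:
    """Split labeling tasks into an expert queue and a crowd queue.

    difficulty_score stands in for a trained difficulty classifier
    (hypothetical here); tasks scored above `threshold` are treated as
    too hard for crowd workers and are routed to experts.
    """
    expert_queue, crowd_queue = [], []
    for task in tasks:
        if difficulty_score(task) > threshold:
            expert_queue.append(task)
        else:
            crowd_queue.append(task)
    return expert_queue, crowd_queue

# Toy usage: a placeholder scorer that treats long sentences as "difficult".
if __name__ == "__main__":
    tasks = [
        SRLTask("The committee approved the merger.", "approved", "the merger"),
        SRLTask("Having been warned repeatedly by regulators, the bank that had "
                "underwritten the deal quietly withdrew its support.", "withdrew",
                "its support"),
    ]
    toy_scorer = lambda t: min(1.0, len(t.sentence.split()) / 20.0)
    experts, crowd = route_tasks(tasks, toy_scorer, threshold=0.6)
    print(f"expert tasks: {len(experts)}, crowd tasks: {len(crowd)}")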

Cite (APA)

Wang, C., Akbik, A., Chiticariu, L., Li, Y., Xia, F., & Xu, A. (2017). Crowd-in-the-loop: A hybrid approach for annotating semantic roles. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1913–1922). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1205
