Assessing the costs of sampling methods in active learning for annotation


Abstract

Traditional Active Learning (AL) techniques assume that the annotation of each datum costs the same. This is not the case when annotating sequences; some sequences take longer to annotate than others. We show that the AL technique which performs best depends on how cost is measured. Applying an hourly cost model based on the results of an annotation user study, we approximate the amount of time necessary to annotate a given sentence. This model allows us to evaluate the effectiveness of AL sampling methods in terms of time spent in annotation. We achieve a 77% reduction in hours over a random baseline in reaching 96.5% tag accuracy on the Penn Treebank. More significantly, we make the case for measuring cost in assessing AL methods. © 2008 Association for Computational Linguistics.
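The core idea of the abstract, selecting examples by benefit per unit of annotation time rather than by benefit alone, can be illustrated with a short sketch. This is not the paper's actual model; the linear time estimate (`base_secs`, `secs_per_token`) and the entropy-based benefit score are illustrative assumptions standing in for the user-study-derived hourly cost model and the paper's sampling methods.

```python
import math

def entropy(probs):
    """Shannon entropy of a per-token tag distribution (model uncertainty)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def estimated_cost(sentence_len, base_secs=5.0, secs_per_token=2.0):
    """Hypothetical linear cost model: annotation time grows with length.

    The real paper fits a cost model to timings from a user study; the
    constants here are made up for illustration only.
    """
    return base_secs + secs_per_token * sentence_len

def select_batch(candidates, k):
    """Pick the k unlabeled sentences with the best uncertainty-per-cost.

    Each candidate is (sentence_length, per_token_tag_distributions).
    A plain uncertainty sampler would rank by total entropy alone and
    favor long sentences; dividing by estimated cost penalizes sentences
    that would eat up annotation hours.
    """
    scored = []
    for sent_len, tag_probs in candidates:
        benefit = sum(entropy(p) for p in tag_probs)  # total tag uncertainty
        scored.append((benefit / estimated_cost(sent_len), (sent_len, tag_probs)))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored[:k]]

# Toy pool: a short uncertain sentence, a long uncertain one, a short easy one.
short_hard = (2, [[0.5, 0.5]] * 2)
long_hard = (20, [[0.5, 0.5]] * 20)
short_easy = (2, [[0.99, 0.01]] * 2)
batch = select_batch([short_hard, long_hard, short_easy], k=1)
```

Under these made-up constants the long uncertain sentence still wins (its total entropy outgrows its cost), which is exactly the kind of outcome that shifts once cost is measured more realistically, the paper's central point.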

Citation (APA)

Haertel, R., Ringger, E., Seppi, K., Carroll, J., & McClanahan, P. (2008). Assessing the costs of sampling methods in active learning for annotation. In ACL-08: HLT - 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 65–68). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1557690.1557708
