Stopping criteria for active learning of named entity recognition

52 citations · 153 Mendeley readers

Abstract

Active learning is a proven method for reducing the cost of creating the training sets that are necessary for statistical NLP. However, there has been little work on stopping criteria for active learning. An operational stopping criterion is necessary to be able to use active learning in NLP applications. We investigate three different stopping criteria for active learning of named entity recognition (NER) and show that one of them, gradient-based stopping, (i) reliably stops active learning, (ii) achieves near-optimal NER performance, and (iii) needs only about 20% as much training data as exhaustive labeling. © 2008. Licensed under the Creative Commons.
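The abstract's gradient-based stopping idea can be illustrated with a small sketch: track one score per active-learning round (e.g. a model-confidence or performance estimate) and stop once the recent slope of that curve flattens out. The exact formulation in the paper is not given in this excerpt, so the function name, window size, and threshold below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a gradient-based stopping criterion for active
# learning. `scores` holds one value per active-learning round, such as
# average model confidence on the unlabeled pool. The window size and
# threshold are assumed values for illustration only.

def gradient_stop(scores, window=5, threshold=0.001):
    """Return True when the average per-round change (discrete gradient)
    of `scores` over the last `window` rounds falls below `threshold`."""
    if len(scores) < window + 1:
        return False  # not enough history to estimate a slope yet
    recent = scores[-(window + 1):]
    slope = (recent[-1] - recent[0]) / window  # average change per round
    return slope < threshold

# Example: the curve is still climbing, so learning continues...
rising = [0.50, 0.60, 0.68, 0.74, 0.78, 0.80]
print(gradient_stop(rising))   # still improving: do not stop

# ...until the curve plateaus, at which point the criterion fires.
plateau = rising + [0.801, 0.801, 0.801, 0.801, 0.801]
print(gradient_stop(plateau))  # flat slope: stop labeling
```

In practice a smoothed estimate (e.g. a moving average of the scores) would be used before taking the slope, since per-round scores are noisy.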

Citation (APA)

Laws, F., & Schütze, H. (2008). Stopping criteria for active learning of named entity recognition. In Coling 2008 - 22nd International Conference on Computational Linguistics, Proceedings of the Conference (Vol. 1, pp. 465–472). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1599081.1599140
