Few-Shot Class-Incremental Learning for Named Entity Recognition

25 citations · 68 Mendeley readers

Abstract

Previous work on class-incremental learning for Named Entity Recognition (NER) relies on the assumption that abundant labeled data are available for training new classes. In this work, we study a more challenging but practical problem: few-shot class-incremental learning for NER, where an NER model is trained with only a few labeled samples of the new classes, without forgetting knowledge of the old ones. To alleviate catastrophic forgetting in few-shot class-incremental learning, we generate synthetic data for the old classes using the trained NER model, augmenting the training of the new classes. We further develop a framework that distills from the NER model of previous steps using both the synthetic data and the real data from the current training set. Experimental results show that our approach achieves significant improvements over existing baselines.
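The abstract describes an objective that combines supervision on the few real samples of the new classes with knowledge distillation from the previous-step model on synthetic old-class data. A minimal NumPy sketch of such a combined loss is shown below; the function names, temperature `T`, and weighting `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax over the last axis.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on synthetic old-class tokens,
    # softened by temperature T (a standard distillation loss).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def cross_entropy(student_logits, labels):
    # Standard cross-entropy on the few real labeled samples of new classes.
    q = softmax(student_logits)
    n = len(labels)
    return float(-np.mean(np.log(q[np.arange(n), labels])))

def incremental_loss(real_logits, real_labels,
                     synth_student_logits, synth_teacher_logits, lam=1.0):
    # Combined objective: supervised loss on real new-class data plus
    # distillation from the previous-step model on synthetic old-class data.
    return (cross_entropy(real_logits, real_labels)
            + lam * distillation_loss(synth_student_logits, synth_teacher_logits))
```

When the student matches the teacher on the synthetic data, the distillation term vanishes and only the supervised term on the new classes drives learning; `lam` trades off plasticity on new classes against retention of old ones.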

Citation (APA)

Wang, R., Yu, T., Zhao, H., Kim, S., Mitra, S., Zhang, R., & Henao, R. (2022). Few-Shot Class-Incremental Learning for Named Entity Recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 571–582). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.43
