Class Incremental Learning for Intent Classification with Limited or No Old Data

Abstract

In this paper, we explore class-incremental learning for intent classification (IC) in a setting with limited or no old data available. IC is the task of mapping user utterances to their corresponding intents. Although class-incremental learning without storing old data holds great potential for reducing human and computational resources in industrial NLP model releases, to the best of our knowledge it has not previously been studied for NLP classification tasks. In this work, we compare several contemporary class-incremental learning methods, i.e., BERT warm start, L2 regularization, Elastic Weight Consolidation, RecAdam, and Knowledge Distillation, within two realistic class-incremental learning scenarios: one in which only the previous model is assumed to be available, but no data corresponding to old classes, and one in which limited unlabeled data for old classes is assumed to be available. Our results indicate that, among the investigated continual learning methods, Knowledge Distillation works best for our class-incremental learning tasks, and that adding limited unlabeled data improves the model in terms of both adaptability (learning new classes) and stability (retaining old ones).
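The best-performing method in the abstract, Knowledge Distillation, constrains the updated model to reproduce the previous model's predictions over old classes while it learns the new ones. Below is a minimal, hypothetical PyTorch sketch of such a distillation objective; the function name, the temperature and alpha hyperparameters, and the assumption that old classes occupy the first columns of the logits are illustrative choices, not details taken from the paper.

import torch
import torch.nn.functional as F

def distillation_ce_loss(new_logits, old_logits, labels, n_old_classes,
                         temperature=2.0, alpha=0.5):
    """Cross-entropy on the current batch plus a distillation term that keeps
    the new model's predictions over old classes close to those of the
    frozen previous model.

    new_logits: (batch, n_old + n_new) logits from the model being trained
    old_logits: (batch, n_old) logits from the frozen previous model
    labels:     (batch,) gold intent indices over all classes
    """
    # Standard cross-entropy on the gold intent labels.
    ce = F.cross_entropy(new_logits, labels)

    # Distillation: soften both distributions over the old classes with a
    # temperature, then match them with KL divergence (Hinton-style scaling
    # by temperature**2 keeps gradient magnitudes comparable).
    # Assumes old classes are the first n_old_classes output columns.
    log_p_new = F.log_softmax(new_logits[:, :n_old_classes] / temperature, dim=-1)
    p_old = F.softmax(old_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

    return alpha * ce + (1.0 - alpha) * kd

In the no-old-data scenario, the frozen previous model would supply old_logits for the new-class training utterances themselves; in the limited-unlabeled-data scenario, it would additionally supply distillation targets for the unlabeled old-class utterances.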

Citation (APA)

Paul, D., Sorokin, D., & Gaspers, J. (2022). Class Incremental Learning for Intent Classification with Limited or No Old Data. In EvoNLP 2022 - 1st Workshop on Ever Evolving NLP, Proceedings of the Workshop (pp. 16–25). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.evonlp-1.4
