Strategies to Improve Few-shot Learning for Intent Classification and Slot-Filling


Abstract

Intent classification (IC) and slot filling (SF) are two fundamental tasks in modern Natural Language Understanding (NLU) systems. Collecting and annotating large amounts of data to train deep learning models for such systems is not scalable. This problem can be addressed by learning from few examples using fast supervised meta-learning techniques such as prototypical networks. In this work, we systematically investigate how contrastive learning and data augmentation methods can benefit these existing meta-learning pipelines for jointly modelled IC/SF tasks. Through extensive experiments across standard IC/SF benchmarks (SNIPS and ATIS), we show that our proposed approaches outperform standard meta-learning methods: contrastive losses as a regularizer in conjunction with prototypical networks consistently outperform the existing state-of-the-art for both IC and SF tasks, while data augmentation strategies primarily improve few-shot IC by a significant margin.

Citation (APA)

Basu, S., Chong, K. I. K., Sharaf, A., Fischer, A., Rohra, V., Amoake, M., … Han, B. (2022). Strategies to Improve Few-shot Learning for Intent Classification and Slot-Filling. In SUKI 2022 - Workshop on Structured and Unstructured Knowledge Integration, Proceedings of the Workshop (pp. 17–25). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.suki-1.3
