Exploring Zero and Few-shot Techniques for Intent Classification


Abstract

Conversational NLU providers often need to scale to thousands of intent-classification models, where new customers frequently face the cold-start problem. Scaling to so many customers also puts a constraint on storage space. In this paper, we explore four zero- and few-shot intent-classification approaches under this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all of these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-Few recipe (Liu et al., 2022) on Flan-T5 (Chung et al., 2022) yields the best performance, even with just one sample per intent. We also show that the zero-shot method of prompting LLMs with intent descriptions is very competitive.
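
As an illustration of approach 3, the sketch below shows how intent descriptions can drive zero-shot classification by prompting an instruction-tuned model such as Flan-T5 through the Hugging Face transformers pipeline. The intent names, descriptions, and prompt wording here are hypothetical, not the paper's exact prompt format.

    from transformers import pipeline

    # Hypothetical intent inventory: labels paired with short natural-language
    # descriptions of what the user wants.
    INTENTS = {
        "book_flight": "the user wants to reserve a plane ticket",
        "cancel_order": "the user wants to cancel an existing order",
        "check_balance": "the user wants to know their account balance",
    }

    generator = pipeline("text2text-generation", model="google/flan-t5-base")

    def classify(utterance: str) -> str:
        # List every intent with its description and ask the model to pick
        # the best-matching label; no per-intent training data is needed.
        options = "\n".join(f"- {name}: {desc}" for name, desc in INTENTS.items())
        prompt = (
            "Classify the user's message into one of the following intents.\n"
            f"{options}\n"
            f"Message: {utterance}\n"
            "Intent:"
        )
        return generator(prompt, max_new_tokens=10)[0]["generated_text"].strip()

    print(classify("I'd like to fly to Boston next Tuesday"))  # e.g. "book_flight"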
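
Approach 4 builds on the T-Few recipe, whose core is the (IA)^3 parameter-efficient method, applied here to Flan-T5. Below is a minimal sketch of attaching (IA)^3 rescaling vectors with the Hugging Face PEFT library; the target module names and the suggestion to train with a standard seq2seq loop are assumptions, not the authors' exact configuration.

    from transformers import AutoModelForSeq2SeqLM
    from peft import IA3Config, TaskType, get_peft_model

    base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

    # (IA)^3 learns small elementwise rescaling vectors inside the attention
    # and feed-forward blocks while the base weights stay frozen. The module
    # names below follow T5's layer naming (key/value projections and the
    # feed-forward output projection) and are an assumption.
    config = IA3Config(
        task_type=TaskType.SEQ_2_SEQ_LM,
        target_modules=["k", "v", "wo"],
        feedforward_modules=["wo"],
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only a tiny fraction of weights train

    # The wrapped model can then be fine-tuned on as little as one labeled
    # example per intent with a standard seq2seq training loop.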

Cite

APA

Parikh, S., Tumbade, P., Vohra, Q., & Tiwari, M. (2023). Exploring Zero and Few-shot Techniques for Intent Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 744–751). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.71
