Multi-Label Intent Detection via Contrastive Task Specialization of Sentence Encoders

Citations: 6 · Mendeley readers: 21

Abstract

Deploying task-oriented dialog (TOD) systems for new domains and tasks requires natural language understanding models that are 1) resource-efficient and effective in low-data regimes; 2) adaptable, efficient, and quick to train; and 3) expressive enough to handle complex TOD scenarios with multiple user intents in a single utterance. Motivated by these requirements, we introduce a novel framework for multi-label intent detection (mID): MULTI-CONVFIT (Multi-Label Intent Detection via Contrastive Conversational Fine-Tuning). While previous work on efficient single-label intent detection learns a classifier on top of a fixed sentence encoder (SE), we propose to 1) transform general-purpose SEs into task-specialized SEs via contrastive fine-tuning on annotated multi-label data; 2) store the task-specialization knowledge in lightweight adapter modules, without updating the original parameters of the input SE; and 3) build improved mID classifiers on top of the fixed, specialized SEs. Our main results indicate that MULTI-CONVFIT yields effective mID models, with large gains over non-specialized SEs across a spectrum of mID datasets, in both low-data and high-data regimes.
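The abstract describes a two-stage pipeline: first contrastively specialize a sentence encoder on multi-label intent data, then train a multi-label classifier on top of the frozen, specialized encoder. The following is a minimal PyTorch sketch of that idea under assumptions of my own: the toy encoder, the positive-pair rule (two utterances are positives if their intent-label sets overlap), and all hyperparameters are illustrative, not the authors' actual choices, and the adapter-based variant (training only inserted adapter layers instead of the full encoder) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceEncoder(nn.Module):
    """Stand-in for a pretrained sentence encoder (hypothetical toy model)."""
    def __init__(self, vocab_size=1000, dim=256):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
    def forward(self, token_ids, offsets):
        return F.normalize(self.emb(token_ids, offsets), dim=-1)  # L2-normed sentence vectors

def multilabel_contrastive_loss(z, labels, temperature=0.07):
    """Supervised contrastive loss where utterances sharing at least one
    intent count as positives (an assumed multi-label extension)."""
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    pos = (labels @ labels.t() > 0).float()          # mask: label sets overlap
    pos.fill_diagonal_(0)                            # a sentence is not its own positive
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    denom = pos.sum(1).clamp(min=1)                  # avoid division by zero
    return -(pos * log_prob).sum(1).div(denom).mean()

# Toy batch: 8 utterances of 8 tokens each, 5 possible intents (multi-label).
token_ids = torch.randint(0, 1000, (64,))
offsets = torch.arange(0, 64, 8)
labels = (torch.rand(8, 5) > 0.7).float()

# Stage 1: contrastive task specialization of the encoder.
encoder = SentenceEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=2e-5)
opt.zero_grad()
loss = multilabel_contrastive_loss(encoder(token_ids, offsets), labels)
loss.backward()
opt.step()

# Stage 2: freeze the specialized encoder, train a multi-label classifier head.
for p in encoder.parameters():
    p.requires_grad_(False)
clf = nn.Linear(256, 5)                              # one logit per intent
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
clf_opt.zero_grad()
logits = clf(encoder(token_ids, offsets))
clf_loss = F.binary_cross_entropy_with_logits(logits, labels)  # independent sigmoid per intent
clf_loss.backward()
clf_opt.step()
```

Note the per-intent sigmoid with binary cross-entropy in stage 2: unlike the softmax used for single-label intent detection, it lets the classifier predict any subset of intents for one utterance, which is what the multi-label setting requires.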

Citation (APA)

Vulić, I., Casanueva, I., Spithourakis, G., Mondal, A., Wen, T. H., & Budzianowski, P. (2022). Multi-Label Intent Detection via Contrastive Task Specialization of Sentence Encoders. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 7544–7559). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.512
