New Intent Discovery with Pre-training and Contrastive Learning

45 citations · 68 Mendeley readers

Abstract

New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes. It is a critical task for the development and service expansion of a practical dialogue system. Despite its importance, this problem remains under-explored in the literature. Existing approaches typically rely on a large number of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate. In this paper, we provide new solutions to two important research questions for new intent discovery: (1) how to learn semantic utterance representations and (2) how to better cluster utterances. Particularly, we first propose a multi-task pre-training strategy to leverage rich unlabeled data along with external labeled data for representation learning. Then, we design a new contrastive loss to exploit self-supervisory signals in unlabeled data for clustering. Extensive experiments on three intent recognition benchmarks demonstrate the high effectiveness of our proposed method, which outperforms state-of-the-art methods by a large margin in both unsupervised and semi-supervised scenarios. The source code will be available at https://github.com/zhang-yu-wei/MTP-CLNN.
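The contrastive objective described above follows a pattern that is easy to sketch in code: mine nearest neighbors in the embedding space of unlabeled utterances as positives, then train with an InfoNCE-style softmax loss over the batch. The PyTorch snippet below is a minimal sketch under that assumption; the function name, tensor shapes, temperature value, and the cyclic-shift neighbor assignment in the toy usage are all illustrative, not the paper's exact CLNN implementation (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def neighbor_contrastive_loss(embeddings: torch.Tensor,
                                  neighbor_ids: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
        # embeddings:   (N, D) utterance embeddings from the encoder
        # neighbor_ids: (N,)   index of one mined nearest neighbor per row,
        #                      used as the self-supervised positive
        z = F.normalize(embeddings, dim=1)    # compare in cosine space
        logits = z @ z.t() / temperature      # (N, N) pairwise similarities
        logits.fill_diagonal_(float('-inf'))  # an example is not its own negative
        # InfoNCE: treat the mined neighbor as the target "class" in a
        # softmax over all other examples in the batch.
        return F.cross_entropy(logits, neighbor_ids)

    # Toy usage: neighbors here are a fixed cyclic shift purely for
    # illustration; in practice they would come from a k-NN search over
    # embeddings of the unlabeled utterances.
    emb = torch.randn(8, 128)
    nbrs = (torch.arange(8) + 1) % 8
    print(neighbor_contrastive_loss(emb, nbrs).item())

Pulling each utterance toward mined neighbors while pushing it away from the rest of the batch tightens intra-cluster structure without any intent labels, which is what makes the subsequent clustering step more accurate.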

Citation (APA)

Zhang, Y., Zhang, H., Zhan, L. M., Wu, X. M., & Lam, A. Y. S. (2022). New Intent Discovery with Pre-training and Contrastive Learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 256–269). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.21
