Learning Dialogue Representations from Consecutive Utterances

Citations: 19
Readers (Mendeley): 48

Abstract

Learning high-quality dialogue representations is essential for solving a variety of dialogue-oriented tasks, especially considering that dialogue systems often suffer from data scarcity. In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks. DSE learns from dialogues by taking consecutive utterances of the same dialogue as positive pairs for contrastive learning. Despite its simplicity, DSE achieves significantly better representation capability than other dialogue representation and universal sentence representation models. We evaluate DSE on five downstream dialogue tasks that examine dialogue representation at different semantic granularities. Experiments in few-shot and zero-shot settings show that DSE outperforms baselines by a large margin. For example, it achieves a 13% average performance improvement over the strongest unsupervised baseline in 1-shot intent classification on 6 datasets. We also provide analyses of the benefits and limitations of our model.
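To make the core idea concrete, below is a minimal sketch of the training objective the abstract describes: consecutive utterances from the same dialogue form positive pairs, and the other utterances in the batch serve as in-batch negatives under an InfoNCE-style contrastive loss. This is an illustrative reconstruction, not the paper's released code; the encoder is abstracted away (random vectors stand in for its outputs), and names like `make_pairs` and the temperature value are assumptions.

```python
# Sketch of contrastive learning over consecutive utterances (assumed setup).
import torch
import torch.nn.functional as F


def make_pairs(dialogues):
    """Turn each dialogue (a list of utterances) into (u_i, u_{i+1}) positive pairs."""
    anchors, positives = [], []
    for utterances in dialogues:
        for a, b in zip(utterances, utterances[1:]):
            anchors.append(a)
            positives.append(b)
    return anchors, positives


def info_nce_loss(z_anchor, z_positive, temperature=0.05):
    """In-batch contrastive loss over L2-normalized embeddings.

    The diagonal of the similarity matrix holds the positive pairs;
    every off-diagonal entry acts as a negative.
    """
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_positive = F.normalize(z_positive, dim=-1)
    logits = z_anchor @ z_positive.T / temperature  # (B, B) similarities
    labels = torch.arange(z_anchor.size(0))         # positive = same index
    return F.cross_entropy(logits, labels)


dialogues = [
    ["hi, i need to book a table", "sure, for how many people?", "four, please"],
    ["what's the weather tomorrow?", "sunny with a high of 75"],
]
anchors, positives = make_pairs(dialogues)

# Stand-ins for encoder outputs; in practice these would come from a
# transformer sentence encoder applied to `anchors` and `positives`.
z_a = torch.randn(len(anchors), 768)
z_p = torch.randn(len(positives), 768)
print(info_nce_loss(z_a, z_p))
```

Treating adjacent turns as positives exploits the fact that consecutive utterances are topically coherent, which is why no manual labels are needed for this objective.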

Cite (APA)

Zhou, Z., Zhang, D., Xiao, W., Dingwall, N., Ma, X., Arnold, A. O., & Xiang, B. (2022). Learning Dialogue Representations from Consecutive Utterances. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 754–768). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.55
