Conversation Disentanglement with Bi-Level Contrastive Learning


Abstract

Conversation disentanglement aims to group utterances into detached sessions, a fundamental task in processing multi-party conversations. Existing methods have two main drawbacks. First, they overemphasize pairwise utterance relations while paying inadequate attention to modeling the utterance-to-context relation. Second, they require a huge amount of human-annotated data for training, which is expensive to obtain in practice. To address these issues, we propose a general disentanglement model based on bi-level contrastive learning. It pulls utterances in the same session closer together while encouraging each utterance to be near its clustered session prototypes in the representation space. Unlike existing approaches, our model works both in supervised settings with labeled data and in unsupervised settings when no such data is available. The proposed method achieves new state-of-the-art results in both settings across several public datasets.
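The two levels described above can be illustrated with a small sketch: an utterance-level InfoNCE loss that treats utterances from the same session as positives, and a prototype-level loss that contrasts each utterance against session centroids. This is not the authors' implementation; the function names, the use of mean-pooled centroids as prototypes, and the temperature value are assumptions made for illustration.

```python
import numpy as np

def normalize(x):
    # L2-normalize rows so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def utterance_level_loss(emb, labels, tau=0.1):
    """InfoNCE over utterance pairs: same-session pairs are positives."""
    z = normalize(emb)
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)          # exclude self-similarity
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    losses = []
    for i in range(len(z)):
        if pos[i].any():                # skip singleton sessions
            losses.append(-np.log(sim[i][pos[i]].sum() / sim[i].sum()))
    return float(np.mean(losses))

def prototype_level_loss(emb, labels, tau=0.1):
    """Contrast each utterance with session prototypes (here: centroids)."""
    z = normalize(emb)
    sessions = np.unique(labels)
    protos = normalize(np.stack([z[labels == s].mean(0) for s in sessions]))
    sim = np.exp(z @ protos.T / tau)
    own = np.array([np.where(sessions == l)[0][0] for l in labels])
    loss = -np.log(sim[np.arange(len(z)), own] / sim.sum(1))
    return float(loss.mean())
```

With embeddings that cluster by session, both losses are lower under the true session labels than under shuffled ones, which is the signal the bi-level objective exploits during training.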

CITATION STYLE

APA

Huang, C., Zhang, Z., Fei, H., & Liao, L. (2022). Conversation Disentanglement with Bi-Level Contrastive Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 2985–2996). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.217
