CausalDialogue: Modeling Utterance-level Causality in Conversations


Abstract

Despite their widespread adoption, neural conversation models have yet to exhibit natural chat capabilities with humans. In this research, we examine user utterances as causes and generated responses as effects, recognizing that a change in a cause should produce a different effect. To explore this concept further, we compiled and expanded a new dataset, CausalDialogue, through crowdsourcing. This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure. Our analysis reveals that traditional loss functions struggle to incorporate the DAG structure effectively, leading us to propose a causality-enhanced method, Exponential Maximum Average Treatment Effect (ExMATE), which strengthens the impact of utterance-level causality when training neural conversation models. To evaluate the need to consider causality in dialogue generation, we built a comprehensive benchmark on the CausalDialogue dataset spanning different models, inference methods, and training methods. Through experiments, we find that a causality-inspired loss such as ExMATE improves the diversity and agility of a conventional loss function, and that there is still room for improvement to reach human-level quality on this new dataset.
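To illustrate the utterance-level DAG structure the abstract describes, here is a minimal sketch of a dialogue graph in which one utterance (the cause) can lead to multiple alternative responses (effects). The utterance identifiers and graph encoding are invented for illustration; the actual CausalDialogue data format may differ.

```python
# A dialogue as a directed acyclic graph: each key is an utterance,
# and each edge (cause -> effect) is one cause-effect pair.
# Utterance IDs here are hypothetical, not from the real dataset.
dialogue_dag = {
    "u1": ["u2a", "u2b"],  # one utterance causes two alternative replies
    "u2a": ["u3"],
    "u2b": [],
    "u3": [],
}

def cause_effect_pairs(dag):
    """Enumerate all directed (cause, effect) utterance pairs in the DAG."""
    return [(cause, effect)
            for cause, effects in dag.items()
            for effect in effects]

print(cause_effect_pairs(dialogue_dag))
# [('u1', 'u2a'), ('u1', 'u2b'), ('u2a', 'u3')]
```

A branching structure like this is what a plain maximum-likelihood loss flattens into independent (context, response) pairs, which is the motivation the abstract gives for a causality-aware objective such as ExMATE.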

Citation (APA)

Tuan, Y. L., Albalak, A., Xu, W., Saxon, M., Pryor, C., Getoor, L., & Wang, W. Y. (2023). CausalDialogue: Modeling Utterance-level Causality in Conversations. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 12506–12522). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.792
