Transferable Persona-Grounded Dialogues via Grounded Minimal Edits

14 citations · 65 Mendeley readers

Abstract

Grounded dialogue models generate responses that are grounded on certain concepts. Limited by the distribution of grounded dialogue data, models trained on such data face transferability challenges with respect to both the data distribution and the type of grounded concepts. To address these challenges, we propose the grounded minimal editing framework, which minimally edits existing responses so that they become grounded on a given concept. Focusing on personas, we propose the Grounded Minimal Editor (GME), which learns to edit by disentangling and recombining the persona-related and persona-agnostic parts of a response. To evaluate persona-grounded minimal editing, we present the PersonaMinEdit dataset, on which experimental results show that GME outperforms competitive baselines by a large margin. To evaluate transferability, we experiment on the test set of BlendedSkillTalk and show that GME can edit dialogue models' responses to substantially improve their persona consistency while preserving their use of knowledge and empathy.
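The disentangle-and-recombine idea behind GME can be caricatured as a mask-and-infill procedure: persona-related spans are removed from the response, leaving a persona-agnostic skeleton, and the slots are then filled with content grounded on the target persona. The sketch below is a toy illustration under loose assumptions; in the actual editor the span detection and infilling are learned by a neural model, whereas here the spans are given and the fill is a literal slot substitution.

```python
def mask_persona_spans(response: str, persona_spans: list[str]) -> str:
    """Replace each persona-related span with a [MASK] slot,
    keeping the persona-agnostic skeleton of the response."""
    masked = response
    for span in persona_spans:
        masked = masked.replace(span, "[MASK]")
    return masked


def recombine(skeleton: str, new_spans: list[str]) -> str:
    """Fill the [MASK] slots, in order, with spans grounded on the
    new persona. (GME learns this infilling; this is a literal
    stand-in for illustration only.)"""
    edited = skeleton
    for span in new_spans:
        edited = edited.replace("[MASK]", span, 1)
    return edited


# Original response grounded on the persona "I love hiking".
response = "That sounds fun! I usually go hiking on weekends."
skeleton = mask_persona_spans(response, ["go hiking"])

# Minimally edit it to be grounded on a new persona, "I love painting",
# while preserving the persona-agnostic parts of the response.
edited = recombine(skeleton, ["paint"])
print(edited)  # That sounds fun! I usually paint on weekends.
```

Note how everything except the persona-related span survives the edit, which is exactly the "minimal" part of grounded minimal editing: the response's other qualities (here, its tone; in the paper's experiments, knowledge and empathy) are left intact.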

Citation (APA)

Wu, C. H., Zheng, Y., Mao, X., & Huang, M. (2021). Transferable Persona-Grounded Dialogues via Grounded Minimal Edits. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2368–2382). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.183
