A persuasion dialogue system reflects a machine's ability to make strategic moves beyond verbal communication; it therefore differs from task-oriented and open-domain dialogue and has its own unique value. However, repetition and inconsistency problems still persist in dialogue response generation, and they can substantially degrade the user experience and hinder the persuasion outcome. In addition, although reinforcement learning (RL) approaches have achieved great success in strategic tasks such as games, they require a sophisticated user simulator to provide real-time feedback to the dialogue system, which limits the application of RL to persuasion dialogues. To address these issues and build a better persuasion dialogue system, we apply RL to refine a language model baseline without user simulators, and distill sentence-level information about repetition, inconsistency, and task relevance through rewards. Moreover, to better accomplish the persuasion task, the model learns from human demonstrations to imitate human persuasion behavior and selects the most persuasive responses. Experiments show that our model outperforms previous state-of-the-art dialogue models on both automatic metrics and human evaluation on a donation persuasion task, and generates more diverse, consistent, and persuasive conversations according to user feedback. The code is available at https://github.com/wyshi/consistency.
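To make the abstract's core idea concrete, below is a minimal, hedged sketch of simulator-free, REINFORCE-style refinement of a pretrained language model with sentence-level rewards. It is not the authors' released implementation (see the repository above for that): the reward terms here are simplified stand-ins, the donation-related keywords in `task_relevance` are hypothetical placeholders, and the base model (`gpt2`), learning rate, and sampling settings are illustrative assumptions.

```python
# Illustrative sketch only: reward-weighted policy-gradient refinement of an LM,
# with toy sentence-level rewards for repetition and task relevance.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # assumed base model
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def repetition_penalty(response, history):
    """Fraction of response tokens already seen in the dialogue history."""
    hist_tokens = set(" ".join(history).lower().split())
    resp_tokens = response.lower().split()
    if not resp_tokens:
        return 0.0
    return sum(t in hist_tokens for t in resp_tokens) / len(resp_tokens)

def task_relevance(response, keywords=("donate", "donation", "charity")):
    """Crude keyword proxy for staying on the persuasion task (hypothetical)."""
    return float(any(k in response.lower() for k in keywords))

def refine_step(history):
    context = tokenizer.eos_token.join(history) + tokenizer.eos_token
    input_ids = tokenizer(context, return_tensors="pt").input_ids
    # Sample a candidate response from the current policy (the LM itself),
    # so no user simulator is needed to produce training signal.
    sampled = model.generate(
        input_ids, do_sample=True, top_p=0.9, max_new_tokens=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    response_ids = sampled[:, input_ids.shape[1]:]
    response = tokenizer.decode(response_ids[0], skip_special_tokens=True)

    # Sentence-level reward: reward task relevance, penalize repetition.
    reward = task_relevance(response) - repetition_penalty(response, history)

    # REINFORCE: scale the response log-likelihood by the scalar reward.
    logits = model(sampled).logits[:, :-1, :]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, sampled[:, 1:].unsqueeze(-1)).squeeze(-1)
    response_logp = token_logp[:, input_ids.shape[1] - 1:].sum()
    loss = -reward * response_logp
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return response, reward
```

In the same spirit as the paper, an inconsistency reward could be added as a third term in `reward` (the paper uses a learned detector for this), and imitation of human demonstrations would supply the response-selection step on top of this refinement loop.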
Citation
Shi, W., Li, Y., Sahay, S., & Yu, Z. (2021). Refine and Imitate: Reducing Repetition and Inconsistency in Persuasion Dialogues via Reinforcement Learning and Human Demonstration. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 3478–3492). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.295