Counterfactual off-policy training for neural dialogue generation

26 Citations
103 Mendeley Readers

Abstract

Open-domain dialogue generation suffers from data insufficiency because the space of potential responses is vast. In this paper, we propose to explore potential responses by counterfactual reasoning. Given an observed response, the counterfactual reasoning model automatically infers the outcome of an alternative policy that could have been taken. Because it is synthesized in hindsight from an observed response, the resulting counterfactual response is of higher quality than a response synthesized from scratch. Training on counterfactual responses under the adversarial learning framework helps to explore the high-reward region of the potential response space. An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model as well as conventional adversarial learning approaches.

Cite (APA)

Zhu, Q., Zhang, W., Liu, T., & Wang, W. Y. (2020). Counterfactual off-policy training for neural dialogue generation. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 3438–3448). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.276
