Adaptive dialog policy learning with hindsight and user modeling


Abstract

Reinforcement learning methods have been used to compute dialog policies from language-based interaction experiences. Efficiency is of particular importance in dialog policy learning, because interacting with people is costly and low-quality conversations make for a very poor user experience. To improve the efficiency of dialog policy learning, we develop the algorithm LHUA (Learning with Hindsight, User modeling, and Adaptation), which, for the first time, enables dialog agents to adaptively learn with hindsight from both simulated and real users. Simulation and hindsight provide the dialog agent with more experience and more (positive) reinforcements, respectively. Experimental results suggest that, in success rate and policy quality, LHUA outperforms competitive baselines from the literature, as well as its no-simulation, no-adaptation, and no-hindsight counterparts.
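The "hindsight" component referenced above follows the general idea of hindsight experience replay: a failed dialog is relabeled as if the goal it actually achieved had been the intended one, yielding extra positive reinforcements. The paper's own formulation is not reproduced here; the sketch below is a minimal illustration of goal relabeling, with all names (`Turn`, `relabel_with_hindsight`, the reward scheme) invented for this example.

```python
# Minimal sketch of hindsight relabeling for a dialog episode.
# All names and the 0/1 reward scheme are illustrative assumptions,
# not taken from the LHUA paper.
from dataclasses import dataclass, replace
from typing import List


@dataclass
class Turn:
    state: str    # dialog state representation at this turn
    action: str   # system action taken
    reward: float # reward observed under the original user goal
    goal: str     # the user goal the agent was conditioned on


def relabel_with_hindsight(episode: List[Turn], achieved_goal: str) -> List[Turn]:
    """Relabel a (possibly failed) episode as if `achieved_goal` had been
    the intended goal, so the final turn carries a success reward."""
    last = len(episode) - 1
    return [
        replace(
            turn,
            goal=achieved_goal,
            # success reward only at the final turn under the achieved goal
            reward=1.0 if i == last else 0.0,
        )
        for i, turn in enumerate(episode)
    ]


# Usage: a dialog that failed to book a flight but ended up booking a hotel
episode = [
    Turn("s0", "request_dates", -1.0, "book_flight"),
    Turn("s1", "offer_hotel", -1.0, "book_flight"),
]
relabeled = relabel_with_hindsight(episode, "book_hotel")
```

The relabeled turns can then be added to the replay buffer alongside the original ones, increasing the density of positive training signal without extra user interaction.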

Citation (APA)

Cao, Y., Lu, K., Chen, X., & Zhang, S. (2020). Adaptive dialog policy learning with hindsight and user modeling. In SIGDIAL 2020 - 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 329–338). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.sigdial-1.40
