Variational reward estimator bottleneck: Towards robust reward estimator for multidomain task-oriented dialogue

Abstract

Despite the effectiveness of adversarial training approaches to multidomain task-oriented dialogue systems, adversarial inverse reinforcement learning of the dialogue policy frequently fails to balance the reward estimator and the policy generator. During optimization, the reward estimator often overwhelms the policy generator, yielding excessively uninformative gradients. We propose the variational reward estimator bottleneck (VRB), a novel and effective regularization strategy that constrains unproductive information flow between the inputs and the reward estimator. The VRB captures discriminative features by imposing an information bottleneck on the mutual information. Quantitative analysis on a multidomain task-oriented dialogue dataset demonstrates that the VRB significantly outperforms previous approaches.
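The bottleneck described above can be sketched as a constraint on a variational upper bound of the mutual information between inputs and the reward estimator's latent code. A minimal NumPy illustration follows, assuming a Gaussian encoder with diagonal covariance and a hinge-style constraint in the spirit of variational-bottleneck methods; the function names and the exact penalty form are illustrative, not the paper's implementation.

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    This KL term is the variational upper bound on the mutual information
    between the input and the latent code produced by the encoder.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def bottleneck_penalty(mu, logvar, info_constraint):
    """Penalty added to the reward estimator's loss when the average
    KL (i.e., the mutual-information bound) exceeds the budget Ic.

    A zero penalty means the estimator is already within its information
    budget; a positive penalty pushes the encoder to discard input detail,
    keeping the estimator from overwhelming the policy generator.
    """
    avg_kl = np.mean(kl_to_standard_normal(mu, logvar))
    return max(avg_kl - info_constraint, 0.0)
```

In practice this penalty would be weighted by an (often adaptively updated) multiplier and added to the adversarial discriminator loss; here only the constraint itself is shown.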

Citation (APA)
Park, J., Lee, C., Park, C., Kim, K., & Lim, H. (2021). Variational reward estimator bottleneck: Towards robust reward estimator for multidomain task-oriented dialogue. Applied Sciences (Switzerland), 11(14). https://doi.org/10.3390/app11146624
