Despite the effectiveness of adversarial training approaches to multidomain task-oriented dialogue systems, adversarial inverse reinforcement learning of the dialogue policy frequently fails to balance the reward estimator against the policy generator: during optimization, the reward estimator tends to overwhelm the policy generator, yielding excessively uninformative gradients. We propose the variational reward estimator bottleneck (VRB), a novel and effective regularization scheme that constrains unproductive information flow between the inputs and the reward estimator. The VRB captures discriminative features by imposing an information bottleneck on the mutual information. Quantitative analysis on a multidomain task-oriented dialogue dataset demonstrates that the VRB significantly outperforms previous methods.
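The information-bottleneck regularization described above can be sketched as a KL constraint on a stochastic encoding of the reward estimator's input, with a Lagrange multiplier updated by dual gradient ascent. This is a minimal NumPy illustration of that idea, not the paper's implementation: the encoder outputs, dimensions, constraint value `i_c`, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, sigma):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the batch.
    # This upper-bounds the mutual information between the inputs and
    # the latent encoding fed to the reward estimator.
    return 0.5 * np.mean(
        np.sum(mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0, axis=1)
    )

def bottleneck_step(mu, sigma, beta, i_c, beta_lr=1e-3):
    """One dual-gradient update of the multiplier beta enforcing the
    constraint E[KL] <= i_c; the estimator's training loss would add
    the penalty term beta * (KL - i_c)."""
    kl = gaussian_kl(mu, sigma)
    beta = max(0.0, beta + beta_lr * (kl - i_c))
    return kl, beta

# Toy encoder outputs for a batch of 32 state-action encodings
# (random placeholders, not a trained dialogue encoder).
mu = rng.normal(0.0, 1.0, size=(32, 8))
sigma = np.exp(rng.normal(0.0, 0.1, size=(32, 8)))

kl, beta = bottleneck_step(mu, sigma, beta=0.0, i_c=0.5)
```

When the measured KL exceeds the budget `i_c`, beta grows and the penalty tightens the bottleneck, which is how this family of methods keeps the reward estimator from overpowering the policy generator.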
CITATION STYLE
Park, J., Lee, C., Park, C., Kim, K., & Lim, H. (2021). Variational reward estimator bottleneck: Towards robust reward estimator for multidomain task-oriented dialogue. Applied Sciences (Switzerland), 11(14). https://doi.org/10.3390/app11146624