Boosting naturalness of language in task-oriented dialogues via adversarial training

Abstract

The natural language generation (NLG) module in a task-oriented dialogue system produces user-facing utterances conveying required information. Thus, it is critical for the generated response to be natural and fluent. We propose to integrate adversarial training to produce more human-like responses. The model uses the Straight-Through Gumbel-Softmax estimator for gradient computation. We also propose a two-stage training scheme to boost performance. Empirical results show that adversarial training can effectively improve the quality of language generation in both automatic and human evaluations. For example, on the RNN-LG Restaurant dataset, our model AdvNLG outperforms the previous state-of-the-art result by 3.6% in BLEU.
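
The paper itself includes no code here, but as a rough illustration of the estimator the abstract refers to, below is a minimal PyTorch sketch of Straight-Through Gumbel-Softmax sampling. The function name, tensor shapes, and temperature value are illustrative assumptions, not taken from the paper: the forward pass emits a discrete one-hot token (so a discriminator sees hard samples), while the backward pass routes gradients through the soft relaxation.

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Sample a one-hot vector from `logits` while keeping the graph differentiable."""
    # Sample Gumbel(0, 1) noise; the small epsilon guards against log(0).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    # Soft (differentiable) sample from the Gumbel-Softmax relaxation.
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    # Discretize to a one-hot vector for the forward pass.
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # Straight-through trick: forward uses y_hard, backward uses y_soft's gradient.
    return y_hard - y_soft.detach() + y_soft

# Toy usage: gradients still reach `logits` despite the discrete sample.
logits = torch.randn(2, 5, requires_grad=True)  # (batch, vocab) -- illustrative sizes
sample = straight_through_gumbel_softmax(logits, tau=0.5)
sample.sum().backward()
```

PyTorch also ships an equivalent built-in, torch.nn.functional.gumbel_softmax(logits, tau, hard=True), which applies the same straight-through discretization.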

Citation (APA)
Zhu, C. (2020). Boosting naturalness of language in task-oriented dialogues via adversarial training. In SIGDIAL 2020 - 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 265–271). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.sigdial-1.33
