Variational cross-domain natural language generation for spoken dialogue systems


Abstract

Cross-domain natural language generation (NLG) remains a difficult task in spoken dialogue modelling. Given a semantic representation provided by the dialogue manager, the language generator should produce sentences that convey the desired information. Traditional template-based generators can produce sentences containing all the necessary information, but these sentences lack diversity. RNN-based models can generate highly diverse sentences; however, some of the required information may be lost in the process. In this work, we improve an RNN-based generator by conditioning generation on sentence-level latent information using the conditional variational autoencoder architecture. We demonstrate that our model outperforms the original RNN-based generator while still producing highly diverse sentences. In addition, our model performs better when training data is limited.
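As a rough illustration of the idea (not the authors' implementation), a conditional variational autoencoder samples a sentence-level latent variable via the reparameterization trick and regularizes it with a KL term toward a standard normal prior. The sketch below shows those two ingredients in plain Python; the latent dimension and the zero-valued mu/log-variance vectors are hypothetical placeholders (in the paper's setting they would come from a recognition network fed with the dialogue act and the target sentence):

```python
import math
import random

def reparameterize(mu, logvar, rng):
    """z_i = mu_i + sigma_i * eps_i, with eps_i ~ N(0, 1).

    The reparameterization trick lets gradients flow through
    mu and logvar during training.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

# Hypothetical sizes: a 16-dimensional sentence-level latent variable.
latent_dim = 16
mu = [0.0] * latent_dim      # placeholder posterior mean
logvar = [0.0] * latent_dim  # placeholder log-variance (sigma = 1)

rng = random.Random(0)
z = reparameterize(mu, logvar, rng)
# z would then condition the RNN decoder at every generation step.
```

With mu = 0 and logvar = 0 the posterior equals the prior, so the KL term is exactly zero; during training the KL term penalizes posteriors that drift away from the prior.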

Citation (APA)

Tseng, B. H., Kreyssig, F., Budzianowski, P., Casanueva, I., Wu, Y. C., Ultes, S., & Gašić, M. (2018). Variational cross-domain natural language generation for spoken dialogue systems. In SIGDIAL 2018 - 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue - Proceedings of the Conference (pp. 338–343). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-5039
