Dual latent variable model for low-resource natural language generation in dialogue systems

Citations: 10 · Mendeley readers: 79

Abstract

Recent deep learning models have shown improved results on natural language generation (NLG) when sufficient annotated data is provided. However, limited training data can degrade such models’ performance. Thus, how to build a generator that can exploit as much knowledge as possible from low-resource data is a crucial issue in NLG. This paper presents a variational neural generation model to tackle the NLG problem of having a limited labeled dataset, in which we integrate variational inference into an encoder-decoder generator and introduce a novel auxiliary auto-encoding objective with an effective training procedure. Experiments showed that the proposed methods not only outperform previous models when a sufficient training dataset is available but also work acceptably well when training data is scarce.
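For a concrete picture of the training signal the abstract describes, the sketch below shows a generic conditional variational encoder-decoder objective: a reconstruction loss plus a KL regularizer obtained via the reparameterization trick. This is not the paper's exact architecture; the auxiliary auto-encoding objective and the training procedure are not reproduced here, and all module names, dimensions, and the KL weight are illustrative assumptions.

```python
# Minimal sketch of a variational encoder-decoder objective in PyTorch.
# Hypothetical: illustrates the generic CVAE-style loss (reconstruction +
# KL from variational inference), not the paper's published model.
import torch
import torch.nn as nn

class VariationalGenerator(nn.Module):
    def __init__(self, vocab_size=1000, hidden=256, latent=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # Recognition network: maps the encoded input to q(z|x) parameters.
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.GRU(hidden + latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # Encode the conditioning input (e.g., a dialogue-act token sequence).
        _, h = self.encoder(self.embed(src))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Teacher forcing: feed tgt[:, :-1], predict tgt[:, 1:];
        # condition every decoder step on the sampled latent variable.
        dec_tokens = tgt[:, :-1]
        z_steps = z.unsqueeze(1).expand(-1, dec_tokens.size(1), -1)
        dec_in = torch.cat([self.embed(dec_tokens), z_steps], dim=-1)
        logits = self.out(self.decoder(dec_in)[0])
        # KL(q(z|x) || N(0, I)), the variational regularizer.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return logits, kl.mean()

model = VariationalGenerator()
src = torch.randint(0, 1000, (4, 12))   # toy conditioning tokens
tgt = torch.randint(0, 1000, (4, 15))   # toy target utterances
logits, kl = model(src, tgt)
recon = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
loss = recon + 0.1 * kl                 # illustrative KL weight
loss.backward()
```

In a low-resource setting, an auxiliary auto-encoding loss of the kind the abstract mentions would add a second reconstruction term computed analogously, letting unlabeled utterances contribute training signal alongside the annotated pairs.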

Citation (APA)

Tran, V. K., & Nguyen, L. M. (2018). Dual latent variable model for low-resource natural language generation in dialogue systems. In CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings (pp. 21–30). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k18-1003
