A good sample is hard to find: Noise injection sampling and self-training for neural language generation models

27 citations · 101 Mendeley readers

Abstract

Deep neural networks (DNNs) are quickly becoming the de facto standard modeling method for many natural language generation (NLG) tasks. For such models to be truly useful, they must be capable of correctly generating utterances for novel meaning representations (MRs) at test time. In practice, even sophisticated DNNs with various forms of semantic control frequently fail to generate utterances faithful to the input MR. In this paper, we propose an architecture-agnostic self-training method that samples novel MR/text utterance pairs to augment the original training data. Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding can generate semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.
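To make the two ideas in the abstract concrete, here is a minimal Python sketch of noise injection sampling followed by a self-training filter. It assumes a PyTorch-style encoder-decoder; the noise scale NOISE_STD, the decoder call signature, and the helpers is_faithful and model.encode are illustrative assumptions, not the authors' implementation.

import torch

NOISE_STD = 0.5  # assumed noise scale; treated here as a tunable hyperparameter

def sample_with_noise(decoder, h, sos_id, eos_id, max_len=40):
    """Greedy decoding, but with Gaussian noise injected into the decoder
    hidden state at each step, so repeated calls give diverse candidates."""
    tokens = []
    tok = torch.tensor([sos_id])
    for _ in range(max_len):
        h = h + NOISE_STD * torch.randn_like(h)  # noise injection
        logits, h = decoder(tok, h)              # assumed interface
        tok = logits.argmax(dim=-1)
        if tok.item() == eos_id:
            break
        tokens.append(tok.item())
    return tokens

def augment(model, mrs, is_faithful, n_samples=30):
    """Self-training step: for each MR, sample candidate utterances and
    keep only those judged faithful to the MR (e.g. by rule-based slot
    matching); the surviving MR/text pairs become extra training data."""
    new_pairs = []
    for mr in mrs:
        h = model.encode(mr)                     # assumed interface
        for _ in range(n_samples):
            text = sample_with_noise(model.decoder, h, model.sos, model.eos)
            if is_faithful(mr, text):
                new_pairs.append((mr, text))
    return new_pairs

In a full pipeline, the pairs returned by augment would be concatenated with the original training set and the model retrained, which is the self-training loop the abstract refers to.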

Citation (APA)

Kedzie, C., & McKeown, K. (2019). A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference (pp. 584–593). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/W19-8672
