Wake-sleep variational autoencoders for language modeling

Abstract

Variational Autoencoders (VAEs) are known to suffer from the KL-vanishing problem when combined with powerful autoregressive models such as recurrent neural networks (RNNs), which has limited their adoption in natural language processing. In this paper, we tackle this problem by splitting the training procedure into two steps: learning effective mechanisms to encode and decode discrete tokens (wake step) and learning meaningful latent variables by reconstructing dreamed encodings (sleep step). The training pattern resembles the wake-sleep algorithm: the two steps are trained alternately until an equilibrium is reached. We evaluate our model on a language modeling task, and the results demonstrate significant improvement over current state-of-the-art latent variable models.
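As a rough illustration of the alternating pattern described above, the sketch below trains a small recurrent encoder/decoder on token reconstruction in the wake step, and trains a latent "bridge" on reconstructing latent variables from dreamed encodings in the sleep step. All module shapes, loss terms, and hyperparameters are assumptions made for illustration; the abstract does not specify the authors' actual architecture or objectives.

```python
# Hypothetical sketch of the alternating wake/sleep training pattern; the
# networks, losses, and sizes below are illustrative assumptions, not the
# authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, LATENT = 1000, 64, 128, 32

class Encoder(nn.Module):
    """Encode a token sequence into a sentence encoding h."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, tokens):                     # tokens: (B, T)
        _, h = self.rnn(self.emb(tokens))
        return h.squeeze(0)                        # (B, HID)

class Decoder(nn.Module):
    """Decode tokens autoregressively from an encoding h (teacher forcing)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens, h):
        y, _ = self.rnn(self.emb(tokens), h.unsqueeze(0))
        return self.out(y)                         # (B, T, VOCAB)

class LatentBridge(nn.Module):
    """Map encodings to latent variables (inference) and back (generation)."""
    def __init__(self):
        super().__init__()
        self.to_z = nn.Linear(HID, 2 * LATENT)     # mean and log-variance
        self.to_h = nn.Linear(LATENT, HID)

    def infer(self, h):
        mu, logvar = self.to_z(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

enc, dec, bridge = Encoder(), Decoder(), LatentBridge()
opt_wake = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_sleep = torch.optim.Adam(bridge.parameters(), lr=1e-3)

def wake_step(tokens):
    """Wake step: learn to encode and decode discrete tokens (reconstruction)."""
    h = enc(tokens)
    logits = dec(tokens[:, :-1], h)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt_wake.zero_grad(); loss.backward(); opt_wake.step()
    return loss.item()

def sleep_step(batch_size):
    """Sleep step: dream an encoding from a prior sample and reconstruct the latent."""
    z_prior = torch.randn(batch_size, LATENT)
    h_dream = bridge.to_h(z_prior)                 # dreamed encoding
    z_rec, mu, logvar = bridge.infer(h_dream)      # re-infer the latent
    recon = F.mse_loss(z_rec, z_prior)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + kl
    opt_sleep.zero_grad(); loss.backward(); opt_sleep.step()
    return loss.item()

# Alternate the two steps until training stabilizes.
for step in range(100):
    tokens = torch.randint(0, VOCAB, (16, 20))     # stand-in for a text batch
    wake_step(tokens)
    sleep_step(batch_size=16)
```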

Cite (APA)

Shen, X., Su, H., Niu, S., & Klakow, D. (2017). Wake-sleep variational autoencoders for language modeling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10634 LNCS, pp. 405–414). Springer Verlag. https://doi.org/10.1007/978-3-319-70087-8_43
