How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks

  • Sønderby C
  • Raiko T
  • Maaløe L
  • et al.
ISSN: 1049-5258
378 citations · 187 Mendeley readers

Abstract

Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in the training of variational autoencoders that, for the first time, allow deep models with up to five stochastic layers to be trained: (1) a structure similar to the Ladder network as the inference model, (2) a warm-up period that keeps stochastic units active during early training, and (3) the use of batch normalization. With these improvements we show state-of-the-art log-likelihood results for generative modeling on several benchmark datasets.
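
Of the three advances, the warm-up period (2) and the ladder-style inference model (1) lend themselves to a short code illustration. The sketch below, in plain Python/NumPy, is an illustrative reading rather than the authors' reference implementation: the function names and the 200-epoch linear schedule are assumptions, while the precision-weighted merge follows the combination of bottom-up and top-down Gaussian estimates that the full paper uses for its ladder-style inference model.

    import numpy as np

    def warmup_beta(epoch, n_warmup=200):
        # Linearly anneal the KL weight from 0 to 1 over the first
        # n_warmup epochs (schedule length is an assumption), so that
        # stochastic units are not driven to the prior (KL ~ 0, unit
        # inactive) before the decoder has learned to use them.
        return min(1.0, epoch / n_warmup)

    def precision_weighted_merge(mu_q, var_q, mu_p, var_p):
        # Ladder-style inference: merge the bottom-up (q) and top-down
        # (p) Gaussian estimates of a latent layer by precision
        # weighting, so the lower-variance estimate dominates.
        prec_q, prec_p = 1.0 / var_q, 1.0 / var_p
        var = 1.0 / (prec_q + prec_p)
        mu = var * (mu_q * prec_q + mu_p * prec_p)
        return mu, var

    def warmed_up_elbo(log_px_given_z, kl, epoch, n_warmup=200):
        # Training objective with warm-up: log p(x|z) - beta * KL(q||p),
        # where beta ramps from 0 to 1 instead of being fixed at 1.
        return log_px_given_z - warmup_beta(epoch, n_warmup) * kl

    if __name__ == "__main__":
        mu, var = precision_weighted_merge(
            np.array([0.5]), np.array([0.1]),  # sharp bottom-up estimate
            np.array([0.0]), np.array([1.0]))  # broad top-down prior
        print(mu, var)  # merged mean ~0.45, pulled toward the sharper estimate
        print([warmup_beta(e) for e in (0, 100, 200, 400)])  # 0.0, 0.5, 1.0, 1.0

The intent, per the abstract, is that ramping beta up slowly keeps stochastic units active during early training, which is what allows the deeper models (up to five stochastic layers) to avoid collapsing their top latent layers.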

Citation (APA)

Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., & Winther, O. (2016). How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks. Advances in Neural Information Processing Systems, 3745–3753. Retrieved from https://arxiv.org/abs/1602.02282v1
