Affine variational autoencoders

Abstract

Variational autoencoders (VAEs) have in recent years become one of the most powerful approaches to learning useful latent representations of data in an unsupervised manner. However, a major challenge with VAEs is that they have great difficulty generalizing to data that deviate from the training set (e.g., perturbed image variants). Data augmentation is normally leveraged to overcome this limitation; however, it is not only computationally expensive but also necessitates more complex models. In this study, we introduce the notion of affine variational autoencoders (AVAEs), which extend the conventional VAE architecture through the introduction of affine layers. More specifically, within the AVAE architecture an affine layer perturbs the input image prior to the encoder, and a second affine layer applies the inverse perturbation to the output of the decoder. The parameters of the affine layers are learned so that the AVAE encodes images at canonical perturbations, resulting in better reconstructions and a disentangled latent space without the need for data augmentation or more complex models. Experimental results demonstrate the efficacy of the proposed architecture in generalizing to affinely perturbed images from the MNIST validation set without data augmentation, achieving significantly lower loss than conventional VAEs.
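A minimal sketch of the forward pass described above, assuming a PyTorch implementation: an affine layer warps the input before the encoder, and the inverse warp is applied to the decoder output. The network sizes, the single shared 2x3 affine matrix, and the use of affine_grid/grid_sample are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only (assumption: PyTorch, MNIST-sized 28x28 inputs).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVAE(nn.Module):
    def __init__(self, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, latent_dim)
        self.fc_logvar = nn.Linear(400, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 400), nn.ReLU(),
                                      nn.Linear(400, 784), nn.Sigmoid())
        # Learnable 2x3 affine matrix, initialized to the identity transform.
        self.theta = nn.Parameter(torch.tensor([[1., 0., 0.],
                                                [0., 1., 0.]]))

    def warp(self, x, theta):
        # Apply the affine transform theta to a batch of images x of shape (N, 1, 28, 28).
        grid = F.affine_grid(theta.expand(x.size(0), 2, 3), x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    def inverse_theta(self):
        # Invert the 2x3 affine matrix via its 3x3 homogeneous form.
        full = torch.cat([self.theta, torch.tensor([[0., 0., 1.]])], dim=0)
        return torch.inverse(full)[:2]

    def forward(self, x):
        x_canon = self.warp(x, self.theta)                # affine layer before the encoder
        h = self.encoder(x_canon)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z).view(-1, 1, 28, 28)
        recon = self.warp(recon, self.inverse_theta())    # inverse affine after the decoder
        return recon, mu, logvar
```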

Citation

Bidart, R., & Wong, A. (2019). Affine variational autoencoders. In Lecture Notes in Computer Science (Vol. 11662, pp. 461–472). Springer. https://doi.org/10.1007/978-3-030-27202-9_42
