Disentangled Representation Learning and Generation with Manifold Optimization

Abstract

Disentanglement is a useful property in representation learning, which increases the interpretability of generative models such as variational autoencoders (VAEs), generative adversarial networks, and their many variants. Typically in such models, an increase in disentanglement performance is traded off against generation quality. In the context of latent space models, this work presents a representation learning framework that explicitly promotes disentanglement by encouraging orthogonal directions of variation. The proposed objective is the sum of an autoencoder error term and a principal component analysis (PCA) reconstruction error in the feature space. This objective can be interpreted as a restricted kernel machine whose eigenvector matrix is valued on the Stiefel manifold. Our analysis shows that such a construction promotes disentanglement by matching the principal directions in the latent space with the directions of orthogonal variation in data space. In an alternating minimization scheme, we use Cayley ADAM, a stochastic optimization method on the Stiefel manifold, together with the Adam optimizer. Our theoretical discussion and various experiments show that the proposed model improves over many VAE variants in terms of both generation quality and disentangled representation learning.
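The abstract describes two ingredients: a combined objective (autoencoder reconstruction error plus a PCA reconstruction error in the latent space, with an orthonormal matrix constrained to the Stiefel manifold) and a Cayley-transform update that keeps that matrix orthonormal during stochastic optimization. The sketch below illustrates both in NumPy under stated assumptions; function names, shapes, and the unweighted sum of the two error terms are illustrative choices, not the paper's exact formulation, and the Cayley step omits the Adam moment estimates that Cayley ADAM adds on top.

```python
import numpy as np

def stiefel_project(A):
    # Map an arbitrary matrix onto the Stiefel manifold (orthonormal
    # columns) via the QR decomposition.
    Q, _ = np.linalg.qr(A)
    return Q

def combined_objective(X, encode, decode, U):
    # Hypothetical sketch of the objective: autoencoder reconstruction
    # error plus a PCA-style reconstruction error in the latent
    # (feature) space, with U (d x m) on the Stiefel manifold.
    H = encode(X)                                # latent codes, (n, d)
    ae_err = np.mean((X - decode(H)) ** 2)       # autoencoder term
    # Project codes onto the m orthonormal directions in U and back;
    # the residual is the PCA reconstruction error in feature space.
    pca_err = np.mean((H - H @ U @ U.T) ** 2)
    return ae_err + pca_err

def cayley_step(U, G, lr):
    # One Cayley-transform step on the Stiefel manifold (the retraction
    # at the core of Cayley ADAM): build a skew-symmetric matrix W from
    # the Euclidean gradient G, then apply the Cayley transform
    # (I + lr/2 W)^{-1} (I - lr/2 W), an orthogonal map, so the updated
    # U keeps orthonormal columns.
    W = G @ U.T - U @ G.T                        # skew-symmetric, (d, d)
    I = np.eye(U.shape[0])
    return np.linalg.solve(I + (lr / 2) * W, (I - (lr / 2) * W) @ U)
```

In an alternating scheme of the kind the abstract describes, the encoder/decoder parameters would be updated with Adam while `U` is updated with steps like `cayley_step`, which preserve feasibility without an explicit re-orthogonalization.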

Citation (APA)

Pandey, A., Fanuel, M., Schreurs, J., & Suykens, J. A. K. (2022). Disentangled Representation Learning and Generation with Manifold Optimization. Neural Computation, 34, 1–28. https://doi.org/10.1162/neco_a_01528
