Disentangling Factors of Variation with Cycle-Consistent Variational Auto-encoders

22 citations · 119 readers (Mendeley users with this article in their library)

This article is free to access.

Abstract

Generative models that learn disentangled representations for different factors of variation in an image can be very useful for targeted data augmentation. By sampling from the disentangled latent subspace of interest, we can efficiently generate new data necessary for a particular task. Learning disentangled representations is a challenging problem, especially when certain factors of variation are difficult to label. In this paper, we introduce a novel architecture that disentangles the latent space into two complementary subspaces using only weak supervision in the form of pairwise similarity labels. Inspired by the recent success of cycle-consistent adversarial architectures, we use cycle-consistency in a variational auto-encoder framework. Our non-adversarial approach contrasts with recent works that combine adversarial training with auto-encoders to disentangle representations. We show compelling results of disentangled latent subspaces on three datasets and compare with recent works that leverage adversarial training.
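To make the abstract's idea concrete, below is a minimal PyTorch sketch of a VAE whose latent code is split into a "specified" part s (the weakly labeled factor shared by a similarity pair) and an "unspecified" part z (everything else), with swap-based and cycle-based consistency losses. This is an illustrative assumption, not the authors' implementation: the class name, layer sizes, loss weighting, and the exact form of the cycle terms are all placeholders, and the paper's forward/reverse cycle losses differ in detail.

```python
# Illustrative sketch only; all names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CycleConsistentVAE(nn.Module):
    def __init__(self, x_dim=784, s_dim=16, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.to_s = nn.Linear(h_dim, s_dim)      # specified factor (deterministic)
        self.to_mu = nn.Linear(h_dim, z_dim)     # unspecified factor (variational)
        self.to_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(s_dim + z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.to_s(h), self.to_mu(h), self.to_logvar(h)

    def decode(self, s, z):
        return self.dec(torch.cat([s, z], dim=1))

    def forward(self, x1, x2):
        # x1 and x2 are a weakly supervised pair sharing the specified factor.
        s1, mu1, lv1 = self.encode(x1)
        s2, mu2, lv2 = self.encode(x2)
        z1 = mu1 + torch.randn_like(mu1) * (0.5 * lv1).exp()  # reparameterize
        z2 = mu2 + torch.randn_like(mu2) * (0.5 * lv2).exp()

        # Forward cycle: swapping the specified codes across the pair
        # should still reconstruct each input.
        recon = (F.binary_cross_entropy(self.decode(s2, z1), x1)
                 + F.binary_cross_entropy(self.decode(s1, z2), x2))
        kl = (-0.5 * torch.mean(1 + lv1 - mu1.pow(2) - lv1.exp())
              - 0.5 * torch.mean(1 + lv2 - mu2.pow(2) - lv2.exp()))

        # Reverse cycle: one random z decoded with either image's s must
        # re-encode to the same unspecified code, so z carries no
        # specified-factor information.
        z_rand = torch.randn_like(z1)
        _, mu1_cyc, _ = self.encode(self.decode(s1, z_rand))
        _, mu2_cyc, _ = self.encode(self.decode(s2, z_rand))
        cycle = F.mse_loss(mu1_cyc, mu2_cyc)

        return recon + kl + cycle

# Toy usage on random data standing in for an image pair.
model = CycleConsistentVAE()
x1, x2 = torch.rand(8, 784), torch.rand(8, 784)
loss = model(x1, x2)
loss.backward()
```

Note that only pairwise similarity labels are assumed here: the losses never reference the value of the shared factor, only that the pair shares it, which is the weak supervision the abstract describes.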

Citation (APA)

Jha, A. H., Anand, S., Singh, M., & Veeravasarapu, V. (2018). Disentangling Factors of Variation with Cycle-Consistent Variational Auto-encoders. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11207 LNCS, pp. 829–845). Springer Verlag. https://doi.org/10.1007/978-3-030-01219-9_49
