Deep disentangled representations for volumetric reconstruction

Abstract

We introduce a convolutional neural network for inferring a compact, disentangled graphical description of objects from 2D images, which can then be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled graphics code; the decoder's first tail generates a volume from it, while the second tail reconstructs the input image. A novel training regime allows the graphics code to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.
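The encoder/twin-tailed-decoder layout described above can be sketched in a few lines. The following is a minimal NumPy illustration only: all layer sizes, the single dense layer per stage, and the split of the code into shape versus pose/lighting dimensions are assumptions for exposition, not the paper's actual convolutional architecture or training regime.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """Randomly initialised linear layer (weights, bias) -- a stand-in
    for the trained convolutional layers of the actual network."""
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

def forward(x, layer):
    W, b = layer
    return np.maximum(x @ W + b, 0.0)  # ReLU activation

# Illustrative sizes: 64x64 input image, 200-d code, 32^3 output volume.
IMG, CODE, VOL = 64 * 64, 200, 32 ** 3
SHAPE_DIMS = 160  # assumed split: first part of the code describes the object,
                  # the remaining units describe pose and lighting

encoder = dense(IMG, CODE)
volume_decoder = dense(SHAPE_DIMS, VOL)  # tail 1: object code -> 3D volume
image_decoder = dense(CODE, IMG)         # tail 2: full code  -> input image

x = rng.standard_normal(IMG)             # a flattened input image
code = forward(x, encoder)               # disentangled graphics code
shape_code, pose_light_code = code[:SHAPE_DIMS], code[SHAPE_DIMS:]

volume = forward(shape_code, volume_decoder)   # volumetric reconstruction
reconstruction = forward(code, image_decoder)  # image reconstruction
print(volume.shape, reconstruction.shape)      # (32768,) (4096,)
```

The key idea this sketch mirrors is that only part of the code feeds the volume decoder, so object identity and viewing conditions occupy separate portions of the representation.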

Citation (APA)

Grant, E., Kohli, P., & van Gerven, M. (2016). Deep disentangled representations for volumetric reconstruction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9915 LNCS, pp. 266–279). Springer Verlag. https://doi.org/10.1007/978-3-319-49409-8_22
