Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild (Extended Abstract)


Abstract

We propose a method to learn 3D deformable object categories from raw single-view images, without external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and illumination. In order to disentangle these components without supervision, we use the fact that many object categories have, at least approximately, a symmetric structure. We show that reasoning about illumination allows us to exploit the underlying object symmetry even if the appearance is not symmetric due to shading. Furthermore, we model objects that are probably, but not certainly, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model. Our experiments show that this method can recover the 3D shape of human faces, cat faces and cars from single-view images with high accuracy, without any supervision or a prior shape model. Code and a demo are available at https://github.com/elliottwu/unsup3d.
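The "probably symmetric" idea in the abstract can be sketched as a confidence-weighted reconstruction loss: the image is reconstructed both directly and from flipped canonical components, and a per-pixel confidence map decides how strongly symmetry is enforced at each pixel. The minimal NumPy illustration below is not the authors' implementation; it assumes a Laplacian reconstruction likelihood with confidence map `sigma`, and it approximates the flipped reconstruction by flipping the direct one (in the full model it would be re-rendered from flipped depth and albedo).

```python
import numpy as np

def hflip(x):
    """Horizontally flip an image array of shape (..., H, W)."""
    return x[..., ::-1]

def confident_l1_loss(image, recon, sigma):
    """L1 reconstruction loss under a per-pixel Laplacian likelihood.

    sigma > 0 is a confidence map: a large sigma down-weights pixels
    where the model is unsure (e.g. asymmetric regions such as hair).
    The log(sigma) term penalizes the trivial solution sigma -> inf.
    """
    return (np.abs(image - recon) / sigma + np.log(sigma)).mean()

def symmetric_reconstruction_loss(image, recon, sigma, sigma_flip):
    """Reconstruct the input both directly and via a horizontal flip,
    each with its own confidence map, so symmetry is encouraged where
    the model is confident but never hard-enforced."""
    return (confident_l1_loss(image, recon, sigma)
            + confident_l1_loss(image, hflip(recon), sigma_flip))
```

For a perfectly symmetric input reconstructed exactly, both terms vanish (with unit confidence); on asymmetric regions, the network can raise `sigma_flip` locally instead of distorting the 3D shape to force symmetry.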

Cite (APA)

Wu, S., Rupprecht, C., & Vedaldi, A. (2021). Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild (Extended Abstract). In IJCAI International Joint Conference on Artificial Intelligence (pp. 4854–4858). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/665
