ShapeCodes: Self-supervised Feature Learning by Lifting Views to Viewgrids

Abstract

We introduce an unsupervised feature learning approach that embeds 3D shape information into a single-view image representation. The main idea is a self-supervised training objective that, given only a single 2D image, requires all unseen views of the object to be predictable from learned features. We implement this idea as an encoder-decoder convolutional neural network. The network maps an input image of an unknown category and unknown viewpoint to a latent space, from which a deconvolutional decoder can best “lift” the image to its complete viewgrid showing the object from all viewing angles. Our class-agnostic training procedure encourages the representation to capture fundamental shape primitives and semantic regularities in a data-driven manner—without manual semantic labels. Our results on two widely-used shape datasets show (1) our approach successfully learns to perform “mental rotation” even for objects unseen during training, and (2) the learned latent space is a powerful representation for object recognition, outperforming several existing unsupervised feature learning methods.
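Below is a minimal sketch of the kind of encoder-decoder network the abstract describes, assuming a PyTorch implementation. The layer sizes, image resolution, latent dimension, and the viewgrid size (here 24 views, e.g. 8 azimuths x 3 elevations) are illustrative assumptions, not the authors' exact architecture; the point is only to show a single view being encoded into a latent "shape code" that a deconvolutional decoder expands into a full viewgrid.

    import torch
    import torch.nn as nn

    class ShapeCodeNet(nn.Module):
        """Sketch: single 64x64 view -> latent shape code -> predicted viewgrid."""
        def __init__(self, num_views=24, latent_dim=256):
            super().__init__()
            self.num_views = num_views
            # Encoder: 1x64x64 input image -> latent vector (the learned feature)
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, latent_dim),
            )
            # Decoder: latent vector -> one 64x64 image per viewgrid cell
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
                nn.ConvTranspose2d(32, num_views, 4, stride=2, padding=1),       # 64x64
                nn.Sigmoid(),
            )

        def forward(self, x):
            code = self.encoder(x)        # latent shape code, usable as an image feature
            viewgrid = self.decoder(code) # (batch, num_views, 64, 64)
            return code, viewgrid

    # Self-supervised objective: reconstruct the full viewgrid from a single view.
    model = ShapeCodeNet()
    images = torch.rand(4, 1, 64, 64)         # single input views
    target_grids = torch.rand(4, 24, 64, 64)   # ground-truth viewgrids rendered from 3D models
    code, pred = model(images)
    loss = nn.functional.mse_loss(pred, target_grids)
    loss.backward()

After training with this reconstruction loss, the encoder's latent code is what would be reused as the image representation for downstream recognition tasks.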

Cite (APA)

Jayaraman, D., Gao, R., & Grauman, K. (2018). ShapeCodes: Self-supervised Feature Learning by Lifting Views to Viewgrids. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11220 LNCS, pp. 126–144). Springer Verlag. https://doi.org/10.1007/978-3-030-01270-0_8
