Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images


Abstract

We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views. Previous work on learning shape reconstruction from multiple views uses discrete representations such as point clouds or voxels, while continuous surface generation approaches lack multi-view consistency. We address these issues by designing neural networks capable of generating high-quality parametric 3D surfaces which are also consistent between views. Furthermore, the generated 3D surfaces preserve accurate image pixel to 3D surface point correspondences, allowing us to lift texture information to reconstruct shapes with rich geometry and appearance. Our method is supervised and trained on a public dataset of shapes from common object categories. Quantitative results indicate that our method significantly outperforms previous work, while qualitative results demonstrate the high quality of our reconstructions.
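To make the core idea concrete: a parametric surface model is a continuous map from a 2D chart, e.g. (u, v) in [0, 1]^2, to points on the object surface in 3D, so the surface can be sampled at arbitrary resolution. The sketch below mimics that interface with a tiny randomly initialized MLP; the weights, layer sizes, and `surface` function are illustrative stand-ins (an actual image-conditioned decoder as in the paper would be trained), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for a trained, image-conditioned decoder.
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)

def surface(uv: np.ndarray) -> np.ndarray:
    """Map UV chart coordinates (N, 2) in [0, 1]^2 to 3D points (N, 3)."""
    h = np.tanh(uv @ W1 + b1)   # hidden layer
    return h @ W2 + b2          # 3D surface point per UV sample

# Because the parameterization is continuous, sampling a denser UV grid
# yields a denser surface reconstruction at no change to the model.
u, v = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
uv = np.stack([u.ravel(), v.ravel()], axis=1)
points = surface(uv)
print(points.shape)  # (1024, 3)
```

The pixel-to-surface correspondences mentioned in the abstract amount to associating each foreground image pixel with a UV coordinate, so color sampled at that pixel can be attached to the 3D point the map produces.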

Citation (APA)

Lei, J., Sridhar, S., Guerrero, P., Sung, M., Mitra, N., & Guibas, L. J. (2020). Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12363 LNCS, pp. 121–138). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58523-5_8
