Efficient dense point cloud object reconstruction using deformation vector fields

Abstract

Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object either as a 3D voxel occupancy grid or as multiple depth-mask image pairs. However, these representations are inefficient: empty voxels and background pixels carry no shape information. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image from an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and a corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of per-view and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.
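
The abstract gives only a high-level description, but the core representational idea can be illustrated concretely. The NumPy sketch below back-projects one predicted surface (a depth-map plus a 2-channel deformation-field of per-pixel (du, dv) offsets) into a 3D point cloud, and scores the agreement between two such clouds with a symmetric nearest-neighbour (Chamfer-style) distance as a stand-in for the paper's multi-view consistency loss. The function names, the offset convention, and the choice of a Chamfer distance are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def backproject_surface(depth, deform, K):
    """Back-project one predicted surface into a point cloud.

    depth  : (H, W) predicted depth-map for a canonical view
    deform : (H, W, 2) deformation-field of per-pixel (du, dv) offsets
             (assumed convention) that shift each pixel so its depth
             sample lands on the object surface
    K      : (3, 3) camera intrinsics of the canonical view
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    # Deform the regular pixel grid before back-projection, so every
    # pixel-depth pair corresponds to a point on the object surface.
    u = u + deform[..., 0]
    v = v + deform[..., 1]
    z = depth.reshape(-1).astype(np.float64)
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones_like(z)])
    rays = np.linalg.inv(K) @ pix          # (3, H*W) camera rays
    return (rays * z).T                    # (H*W, 3) 3D points

def multiview_consistency(points_a, points_b):
    """Chamfer-style stand-in for the multi-view loss: points
    back-projected from one view should lie close to the points
    back-projected from another, once both are in a common frame.
    Brute-force O(N*M) pairwise distances; fine for a small sketch."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Fusing the per-surface point clouds into the full shape would then amount to concatenating them, e.g. np.concatenate([backproject_surface(d, f, K) for d, f in surfaces]); the paper's actual fusion procedure and loss formulation are given in the reference below.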

Citation (APA)

Li, K., Pham, T., Zhan, H., & Reid, I. (2018). Efficient dense point cloud object reconstruction using deformation vector fields. In Lecture Notes in Computer Science (Vol. 11216, pp. 508–524). Springer. https://doi.org/10.1007/978-3-030-01258-8_31
