ArticulatedFusion: Real-time reconstruction of motion, geometry and segmentation using a single depth camera


Abstract

This paper proposes a real-time dynamic scene reconstruction method capable of simultaneously recovering motion, geometry, and segmentation from the live depth stream of a single RGB-D camera. Our approach fuses geometry frame by frame and uses a segmentation-enhanced node graph structure to drive the deformation of the geometry in the registration step. A two-level node motion optimization is proposed. The optimization space of node motions and the range of physically plausible deformations are greatly reduced by exploiting an articulated motion prior, which is obtained with an efficient node graph segmentation method. Compared to previous fusion-based dynamic scene reconstruction methods, our experiments show more robust and accurate reconstruction results for tangential and occluded motions.
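While the full pipeline is described in the paper, the node-graph deformation it builds on is the standard embedded-deformation idea: each surface vertex is warped by blending the rigid motions of its nearby graph nodes. The sketch below illustrates only that blending step in Python; the function names, the Gaussian weighting, and all parameter values are illustrative assumptions, not the authors' implementation.

# Minimal sketch of node-graph-driven deformation (embedded deformation),
# the general technique used by fusion-based reconstruction methods.
# All names and parameters are illustrative assumptions.
import numpy as np

def skinning_weights(vertices, nodes, sigma=0.05, k=4):
    """For each vertex, find its k nearest graph nodes and compute
    normalized Gaussian blending weights."""
    d = np.linalg.norm(vertices[:, None, :] - nodes[None, :, :], axis=2)  # (V, N)
    idx = np.argsort(d, axis=1)[:, :k]                                    # (V, k)
    w = np.exp(-np.take_along_axis(d, idx, axis=1) ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return idx, w

def deform(vertices, nodes, rotations, translations, idx, w):
    """Warp each vertex by blending the rigid motions (R_j, t_j) of its
    neighboring nodes g_j: v' = sum_j w_j * (R_j (v - g_j) + g_j + t_j)."""
    out = np.zeros_like(vertices)
    for j in range(idx.shape[1]):
        n = idx[:, j]                                      # j-th neighbor of each vertex
        local = vertices - nodes[n]                        # vertex in node-local frame
        warped = np.einsum('vab,vb->va', rotations[n], local) + nodes[n] + translations[n]
        out += w[:, j:j + 1] * warped
    return out

In a full system along the lines of ArticulatedFusion, the per-node rotations and translations would be solved by registering the deformed model against each incoming depth frame, with the articulated segmentation constraining nodes in the same segment toward a shared rigid motion.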

Citation (APA)

Li, C., Zhao, Z., & Guo, X. (2018). ArticulatedFusion: Real-time reconstruction of motion, geometry and segmentation using a single depth camera. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11212 LNCS, pp. 324–340). Springer Verlag. https://doi.org/10.1007/978-3-030-01237-3_20
