Model-based 3D scene reconstruction using a moving RGB-D camera


Abstract

This paper presents a scalable model-based approach to 3D scene reconstruction using a moving RGB-D camera. The proposed approach improves the accuracy of pose estimation by exploiting the rich information in the multi-channel RGB-D image data, and offers several advantages in the reconstruction quality of the 3D scene over conventional approaches that rely on sparse features for pose estimation. A pre-learned image-based 3D model provides multiple templates for sampled views of the model, which are used to estimate the poses of the frames in the input RGB-D video without requiring a priori internal and external camera parameters. Through template-to-frame registration, the reconstructed 3D scene can be loaded into an augmented reality (AR) environment to facilitate display, interaction, and rendering in an image-based AR application. Finally, we verify the capability of the reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in pose estimation accuracy.
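The abstract does not detail the registration pipeline, but the core geometric step behind template-to-frame registration — recovering a rigid camera pose from matched 3D points between a model template and an RGB-D frame — is commonly solved with the Kabsch/Procrustes method. The sketch below assumes point correspondences are already established; the function name and inputs are illustrative, not taken from the paper:

```python
import numpy as np

def estimate_rigid_pose(template_pts, frame_pts):
    """Estimate the rigid transform (R, t) mapping template points onto
    frame points via the Kabsch method (SVD of the cross-covariance).
    Both inputs are (N, 3) arrays of matched 3D points."""
    ct = template_pts.mean(axis=0)          # template centroid
    cf = frame_pts.mean(axis=0)             # frame centroid
    H = (template_pts - ct).T @ (frame_pts - cf)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ ct
    return R, t
```

In a full system this closed-form solve would typically sit inside a robust loop (e.g. RANSAC over candidate correspondences) and be refined with dense RGB-D alignment, which is where the multi-channel information mentioned in the abstract would come in.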

Citation (APA)

Cheng, S. C., Su, J. Y., Chen, J. M., & Hsieh, J. W. (2017). Model-based 3D scene reconstruction using a moving RGB-D camera. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10132 LNCS, pp. 214–225). Springer Verlag. https://doi.org/10.1007/978-3-319-51811-4_18
