3D line segment based model generation by RGB-D camera for camera pose estimation

Abstract

In this paper, we propose a novel method for generating a 3D line-segment-based model from an image sequence captured with an RGB-D camera. Constructing a 3D geometric representation of the scene is essential for model-based camera pose estimation, which works by matching 2D features in images with 3D features of the captured scene. While conventional camera pose estimation mostly relies on point features, we aim to use line segment features to improve pose estimation performance. In the proposed method, using the RGB and depth images of two consecutive frames, 2D line segments detected in the current frame are matched with 3D line segments from the previous frame. These 2D-3D line segment correspondences yield the camera pose of the current frame. Finally, all 2D line segments are back-projected into world coordinates based on the estimated camera pose to generate a 3D line-segment-based model of the target scene. In experiments, we confirmed that the proposed method successfully generates line-segment-based models, whereas 3D models based on point features often fail to represent the target scene.
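
The back-projection step described above can be illustrated with a minimal sketch: lift the two endpoints of a 2D line segment into camera coordinates using their depth values and the pinhole model, then transform them into world coordinates with the estimated camera pose. The intrinsics, pose convention, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def backproject_point(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with metric depth into camera coordinates (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def segment_to_world(p0, p1, depth0, depth1, K, R, t):
    """Back-project the two endpoints of a 2D line segment into world coordinates.

    K: 3x3 intrinsic matrix; (R, t): camera-to-world rotation and translation,
    i.e. X_world = R @ X_cam + t (assumed convention).
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    X0 = backproject_point(p0[0], p0[1], depth0, fx, fy, cx, cy)
    X1 = backproject_point(p1[0], p1[1], depth1, fx, fy, cx, cy)
    return R @ X0 + t, R @ X1 + t

# Example: one segment with endpoint depths of roughly 1.2 m and 1.5 m
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])   # typical Kinect-style intrinsics (assumed)
R, t = np.eye(3), np.zeros(3)            # identity pose for illustration only
w0, w1 = segment_to_world((100, 200), (180, 210), 1.2, 1.5, K, R, t)
print(w0, w1)
```

Repeating this for every matched segment, with the pose estimated from the 2D-3D correspondences of each frame, accumulates the 3D line segments that form the scene model.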

Citation (APA)

Nakayama, Y., Saito, H., Shimizu, M., & Yamaguchi, N. (2015). 3D line segment based model generation by RGB-D camera for camera pose estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9010, pp. 459–472). Springer Verlag. https://doi.org/10.1007/978-3-319-16634-6_34
