Direct Image to Point Cloud Descriptors Matching for 6-DOF Camera Localization in Dense 3D Point Clouds

Abstract

We propose a novel concept to directly match feature descriptors extracted from RGB images with feature descriptors extracted from 3D point clouds. We use this concept to localize the position and orientation (pose) of the camera of a query image in dense point clouds. We generate a dataset of matching 2D and 3D descriptors, and use it to train the proposed Descriptor-Matcher algorithm. To localize a query image in a point cloud, we extract 2D key-points and descriptors from the query image. The Descriptor-Matcher is then used to find corresponding pairs of 2D and 3D key-points by matching the 2D descriptors against the pre-extracted 3D descriptors of the point cloud. These correspondences are fed to a robust pose estimation algorithm to localize the query image in the 3D point cloud. Experiments demonstrate that directly matching 2D and 3D descriptors is not only a viable idea but can also be used for camera pose localization in dense 3D point clouds with high accuracy.
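The pipeline described above (extract 2D descriptors, match them to pre-extracted 3D descriptors, then estimate pose from the resulting 2D–3D correspondences) can be sketched in code. The snippet below is a minimal, hypothetical stand-in: the paper's Descriptor-Matcher is a trained model, whereas here we assume the 2D and 3D descriptors already live in a common space and use mutual nearest-neighbour matching with a ratio test. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def match_descriptors(desc2d, desc3d, ratio=0.8):
    """Match query-image 2D descriptors to point-cloud 3D descriptors.

    Hypothetical stand-in for the paper's learned Descriptor-Matcher:
    assumes both descriptor sets lie in a common space, and keeps a
    match only if it passes Lowe's ratio test and is a mutual nearest
    neighbour. Returns a list of (2d_index, 3d_index) pairs.
    """
    # Pairwise Euclidean distances: rows = 2D descriptors, cols = 3D.
    d = np.linalg.norm(desc2d[:, None, :] - desc3d[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        # Ratio test: best match must be clearly better than runner-up.
        if d[i, best] < ratio * d[i, second]:
            # Mutual check: the 3D descriptor's nearest 2D descriptor is i.
            if np.argmin(d[:, best]) == i:
                matches.append((i, int(best)))
    return matches
```

The recovered (2D key-point, 3D point) pairs would then go to a robust pose solver, e.g. RANSAC-based PnP such as OpenCV's `cv2.solvePnPRansac`, to obtain the 6-DOF camera pose.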

Citation (APA)

Nadeem, U., Jalwana, M. A. A. K., Bennamoun, M., Togneri, R., & Sohel, F. (2019). Direct Image to Point Cloud Descriptors Matching for 6-DOF Camera Localization in Dense 3D Point Clouds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11954 LNCS, pp. 222–234). Springer. https://doi.org/10.1007/978-3-030-36711-4_20
