Camera relocalization is a challenging task, especially when it relies on a sparse 3D map or a small set of keyframes. In this paper, we present an accurate method for RGB camera relocalization against a very sparse 3D map built from a limited number of keyframes. The core of our approach is a top-to-down feature matching strategy that produces a set of accurate 2D-to-3D matches. Specifically, we first apply a landmark-based place recognition method to retrieve, from the keyframes, the images most similar to the current view together with a set of pairwise matched landmarks. This step constrains the 3D model points that can be matched with the current view. Keypoints are then matched within each landmark pair, and the resulting matches are combined. This contrasts with conventional feature matching methods, which typically match points between entire images and the whole 3D map and therefore may not be robust to large viewpoint changes, the main challenge of relocalization with a sparse map. After feature matching, the camera pose is computed by an efficient novel Perspective-n-Point (PnP) algorithm. Experiments on challenging datasets demonstrate that the camera poses estimated by our method from the sparse 3D point cloud are more accurate than those of classical methods that use a dense map or a large number of training images.
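To make the retrieve-then-match pipeline concrete, the sketch below outlines the three steps in the order the abstract describes them: landmark-based place recognition, per-landmark keypoint matching, and a PnP solve. It is a minimal illustration, not the authors' implementation: the `recognize_place` helper and the keyframe/landmark data layout are assumptions, ORB descriptors stand in for whatever features the paper uses, and OpenCV's `solvePnPRansac` substitutes for the paper's own PnP algorithm.

```python
# Minimal sketch of landmark-restricted 2D-to-3D matching followed by PnP,
# under the assumptions stated above.
import numpy as np
import cv2

def relocalize(query_img, keyframes, K):
    """Estimate the pose of query_img against a sparse keyframe map.

    keyframes: per-keyframe landmark regions with descriptors and the
               associated 3D map points (an assumed data layout).
    K: 3x3 camera intrinsic matrix.
    """
    # Step 1 (assumed interface): landmark-based place recognition returns the
    # most similar keyframe and pairs of matched landmark regions.
    best_kf, landmark_pairs = recognize_place(query_img, keyframes)  # hypothetical helper

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    pts2d, pts3d = [], []
    # Step 2: match keypoints only inside each matched landmark pair, then
    # combine the per-landmark matches into one 2D-3D correspondence set.
    for query_region, kf_landmark in landmark_pairs:
        kp_q, des_q = orb.detectAndCompute(query_region.patch, None)
        if des_q is None or kf_landmark["descriptors"] is None:
            continue
        for m in matcher.match(des_q, kf_landmark["descriptors"]):
            u, v = kp_q[m.queryIdx].pt
            pts2d.append([u + query_region.x0, v + query_region.y0])
            pts3d.append(kf_landmark["points3d"][m.trainIdx])

    if len(pts3d) < 4:
        return None  # too few correspondences for a PnP solve

    # Step 3: PnP with RANSAC (standing in for the paper's PnP variant).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, dtype=np.float64),
        np.asarray(pts2d, dtype=np.float64),
        K, None)
    return (rvec, tvec) if ok else None
```

Restricting matching to paired landmark regions, as in Step 2, is what keeps the correspondence search local even under large viewpoint changes, which is the point the abstract contrasts with whole-image, whole-map matching.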
Yang, B., Xu, X., & Li, J. (2019). Keyframe-Based Camera Relocalization Method Using Landmark and Keypoint Matching. IEEE Access, 7, 86854–86862. https://doi.org/10.1109/ACCESS.2019.2925121