RnR: Retrieval and Reprojection Learning Model for Camera Localization

3 citations · 12 Mendeley readers

This article is free to access.

Abstract

Camera localization is an essential technique in many applications, such as robot navigation, mixed reality, and unmanned vehicles. We address the problem of predicting the 6-DoF pose of a camera from a single color image in a given three-dimensional (3D) environment. In this paper, we propose a robust learning model for this task. Our method consists of two steps: image retrieval and space reprojection. The former performs simultaneous localization and mapping based on pre-captured reference images, relying on the correspondence between pixel points and scene coordinates; the latter carries out camera calibration between the 2D image plane and the 3D scene. Given a two-dimensional (2D) image, an initial localization is obtained rapidly by matching a reference image with a Siamese network. A more precise localization is then achieved by camera calibration between the 2D image and the 3D scene using a fully convolutional network. Experimental results on public datasets show that our model is more robust and extensible than previous methods. Finally, we apply the system to Unmanned Aerial Vehicle (UAV) localization and achieve good results.
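The two-step pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a plain embedding lookup stands in for the Siamese retrieval network, and a classic Direct Linear Transform (DLT) pose solve stands in for the learned 2D-3D calibration; all function names are ours, not the authors'.

```python
import numpy as np

def retrieve_nearest(query_emb, ref_embs):
    """Step 1 (retrieval): pick the pre-captured reference image whose
    embedding is closest to the query by cosine similarity. In the paper
    this role is played by a Siamese network; raw vectors stand in here."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    return int(np.argmax(r @ q))

def dlt_projection(points_3d, points_2d):
    """Step 2 (reprojection): estimate the 3x4 projection matrix from
    2D-3D correspondences with the Direct Linear Transform -- a classical
    stand-in for scene-coordinate regression followed by a pose solve.
    Needs at least 6 correspondences for the 11 DoF of P."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Null vector of the stacked constraints, via SVD (solution up to scale).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def reproject(P, points_3d):
    """Project 3D scene points back to the image plane with P, to check
    the reprojection error of the estimated pose."""
    h = np.c_[points_3d, np.ones(len(points_3d))] @ P.T
    return h[:, :2] / h[:, 2:3]
```

With noise-free synthetic correspondences the DLT recovers the projection matrix up to scale, so reprojecting the 3D points reproduces the original 2D observations; the learned model in the paper plays the same role for real, noisy imagery.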

Cite (APA)

Yang, S., & Shi, D. (2021). RnR: Retrieval and Reprojection Learning Model for Camera Localization. IEEE Access, 9, 34626–34634. https://doi.org/10.1109/ACCESS.2021.3061634
