Deep Global-Relative Networks for End-to-End 6-DoF Visual Localization and Odometry

14 citations · 56 Mendeley readers

Abstract

Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. This paper therefore proposes novel deep end-to-end networks for the long-term 6-DoF VO task. The approach fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve monocular localization accuracy: the relative sub-networks smooth the VO trajectory, while the global sub-networks avoid the drift problem. All parameters are jointly optimized using Cross Transformation Constraints (CTC), which represent the temporal geometric consistency of consecutive frames, together with the Mean Square Error (MSE) between the predicted pose and the ground truth. Experimental results on both indoor and outdoor datasets show that the method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
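The abstract describes a joint objective combining a geometric-consistency term (CTC) with a supervised pose MSE term. The paper does not give the exact formulation here, so the following is a minimal sketch of one plausible reading, assuming poses are represented as 4×4 homogeneous SE(3) matrices and that the CTC penalizes the mismatch between the predicted relative transform and the transform implied by two consecutive global pose predictions; the function names and the weighting parameter `alpha` are illustrative, not from the paper.

```python
import numpy as np

def se3_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def ctc_loss(T_rel_pred, T_global_t, T_global_t1):
    """Cross Transformation Constraint (assumed form): the predicted relative
    transform between frames t and t+1 should match the transform implied by
    the two global pose predictions, inv(T_global_t) @ T_global_t1."""
    T_rel_implied = np.linalg.inv(T_global_t) @ T_global_t1
    return np.mean((T_rel_pred - T_rel_implied) ** 2)

def mse_pose_loss(pose_pred, pose_gt):
    """MSE between a predicted 6-DoF pose vector and the ground-truth pose."""
    return np.mean((np.asarray(pose_pred) - np.asarray(pose_gt)) ** 2)

def total_loss(T_rel_pred, T_global_t, T_global_t1,
               pose_pred, pose_gt, alpha=1.0):
    # Joint objective: supervised pose error plus weighted geometric consistency.
    return mse_pose_loss(pose_pred, pose_gt) \
        + alpha * ctc_loss(T_rel_pred, T_global_t, T_global_t1)
```

For example, if the global pose at frame t is the identity and the global pose at frame t+1 is a pure translation, a relative prediction equal to that translation incurs zero CTC penalty, while any deviation is penalized quadratically.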

Citation (APA)

Lin, Y., Liu, Z., Huang, J., Wang, C., Du, G., Bai, J., & Lian, S. (2019). Deep Global-Relative Networks for End-to-End 6-DoF Visual Localization and Odometry. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11671 LNAI, pp. 454–467). Springer Verlag. https://doi.org/10.1007/978-3-030-29911-8_35
