Independent Learning of Motion Parameters for Deep Visual Odometry

Abstract

Vision-based localization is one of the major aspects of industrial and space robotics. Although many sensing modalities exist for motion estimation, cameras are widely used due to their availability and low cost. Visual odometry estimates the motion parameters of a camera from the images it captures. Multiple sensing modalities can be fused to improve estimation accuracy, but at increased cost. With the success of deep learning architectures in computer vision, one of the recent paradigm shifts in visual odometry has been estimating motion with non-geometric schemes in an end-to-end manner. The different stages of the traditional visual odometry pipeline are replaced by a single function mapping input images to the output 6 DoF pose of the camera. There are many ways to apply deep learning to visual odometry; one of the common techniques is transfer learning. In this work, the traditional DeepVO and ResNetVO architectures are analyzed by incorporating a novel architecture splitting and independent learning scheme. The estimation results show the efficacy of the proposed algorithm.
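To make the idea of architecture splitting with independent learning of motion parameters concrete, the following is a minimal sketch, not the authors' exact architecture: it assumes a shared convolutional encoder over a stacked image pair, with two independent heads regressing rotation and translation separately. All layer sizes, the Euler-angle rotation parameterization, and the loss weighting factor `kappa` are illustrative assumptions.

```python
# Hypothetical sketch of a split-head deep VO network (assumed details,
# not the published DeepVO/ResNetVO configuration): a shared CNN encoder
# processes a stacked pair of frames, and two independent heads learn the
# rotation and translation parameters separately.
import torch
import torch.nn as nn


class SplitHeadVO(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Shared feature extractor over a stacked pair of RGB frames (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Independent heads: rotation (3 Euler angles) and translation (3 values).
        self.rot_head = nn.Sequential(
            nn.Linear(256, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.trans_head = nn.Sequential(
            nn.Linear(256, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, img_pair):
        features = self.encoder(img_pair)          # (B, 256)
        return self.rot_head(features), self.trans_head(features)


def pose_loss(pred_rot, pred_trans, gt_rot, gt_trans, kappa=100.0):
    # Weighted sum of the two independent regression losses; the rotation
    # term is commonly up-weighted because its magnitude is much smaller.
    mse = nn.functional.mse_loss
    return mse(pred_trans, gt_trans) + kappa * mse(pred_rot, gt_rot)


# Usage example on a dummy batch of two stacked 64x64 frames.
model = SplitHeadVO()
rot, trans = model(torch.randn(4, 6, 64, 64))
print(rot.shape, trans.shape)  # torch.Size([4, 3]) torch.Size([4, 3])
```

Splitting the network this way lets each branch specialize, so errors in the rotation estimate do not directly couple into the translation branch during backpropagation; the shared encoder still provides common visual features to both.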

Citation (APA)

Kottath, R., Kaw, R., Poddar, S., Bhondekar, A. P., & Karar, V. (2021). Independent Learning of Motion Parameters for Deep Visual Odometry. In Advances in Intelligent Systems and Computing (Vol. 1245, pp. 785–794). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-15-7234-0_74
