Deep monocular visual odometry for ground vehicle

Abstract

Monocular visual odometry, which helps robots localize themselves in unexplored environments, has been a crucial research problem in robotics. Although existing learning-based end-to-end methods can reduce engineering effort such as accurate camera calibration and tedious case-by-case parameter tuning, their accuracy remains limited. One of the main reasons is that previous works aim to learn full six-degrees-of-freedom motion, even though a ground vehicle's motion is constrained by its mechanical structure and dynamics. To push the limit, we analyze the motion pattern of a ground vehicle and focus on learning two-degrees-of-freedom motion through the proposed motion focusing and decoupling. Experiments on the KITTI dataset show that the proposed motion focusing and decoupling approach improves visual odometry performance by reducing the relative pose error. Moreover, with the reduced dimensionality of the learning objective, our network is much lighter, with only four convolution layers; it converges quickly during training and runs in real time at over 200 frames per second during testing.
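The abstract does not give implementation details, but the core idea of regressing only two degrees of freedom (forward translation and yaw) with a four-convolution-layer network, then lifting the result to a full pose under a planar-motion assumption, can be sketched as follows. This is a minimal PyTorch sketch, not the authors' released code: the layer widths, kernel sizes, and the names TwoDoFPoseNet and two_dof_to_se3 are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's architecture): a lightweight pose
# network with four convolution layers that regresses the two dominant
# degrees of freedom of a ground vehicle -- forward translation and yaw --
# from a pair of consecutive frames.
import torch
import torch.nn as nn

class TwoDoFPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: two RGB frames stacked along the channel axis (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Global average pooling plus a small linear head keeps the
        # parameter count low; the output is [forward distance, yaw angle].
        self.head = nn.Linear(128, 2)

    def forward(self, frame_pair):
        features = self.encoder(frame_pair)
        pooled = features.mean(dim=(2, 3))  # global average pool over H, W
        return self.head(pooled)            # shape (B, 2): [t_forward, yaw]

def two_dof_to_se3(t_forward, yaw):
    """Lift the predicted 2-DoF motion to a 4x4 SE(3) transform, assuming
    planar motion (zero roll/pitch, no lateral or vertical translation)."""
    c, s = torch.cos(yaw), torch.sin(yaw)
    T = torch.eye(4)
    # Rotation about the vertical (y) axis of a camera-style frame,
    # translation along the optical (z) axis.
    T[0, 0], T[0, 2] = c, s
    T[2, 0], T[2, 2] = -s, c
    T[2, 3] = t_forward
    return T

# Usage: pred = TwoDoFPoseNet()(torch.randn(1, 6, 128, 416))
#        T = two_dof_to_se3(pred[0, 0], pred[0, 1])
```

With only two outputs, the head is tiny and the encoder can stay shallow, which is consistent with the reported fast convergence and over-200-fps inference, though the exact speed depends on the input resolution and hardware.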

Cite (APA)

Wang, X., & Zhang, H. (2020). Deep monocular visual odometry for ground vehicle. IEEE Access, 8, 175220–175229. https://doi.org/10.1109/ACCESS.2020.3025557
