Learning a context-dependent switching strategy for robust visual odometry


Abstract

Many applications for robotic systems require the systems to traverse diverse, unstructured environments. State estimation with Visual Odometry (VO) in these applications is challenging because no single algorithm performs well across all environments and situations. The unique trade-offs inherent to each algorithm mean different algorithms excel in different environments. We develop a method to increase robustness in state estimation by using an ensemble of VO algorithms. The method combines the estimates by dynamically switching to the best algorithm for the current context, according to a statistical model of VO estimate errors. The model is a Random Forest regressor that is trained to predict the accuracy of each algorithm as a function of different features extracted from the sensory input. We evaluate our method on a dataset consisting of four unique environments and eight runs, totaling over 25 min of data. Our method reduces the mean translational relative pose error by 3.5% and the angular error by 4.3% compared to the single best odometry algorithm. Compared to the poorest-performing odometry algorithm, our method reduces the mean translational error by 39.4% and the angular error by 20.1%.
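The switching strategy described above can be sketched in a few lines: train one regressor per VO algorithm to predict its error from input features, then at runtime run all regressors on the current frame's features and hand control to the algorithm with the lowest predicted error. The sketch below is a minimal illustration with synthetic data; the paper's actual feature set, VO algorithms, and training details are not reproduced here, and all names (`select_algorithm`, the simulated error model) are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_frames, n_features, n_algorithms = 200, 5, 3

# Hypothetical per-frame features extracted from the sensory input.
X = rng.normal(size=(n_frames, n_features))

# Simulated per-algorithm pose errors: each algorithm is best in a
# different region of feature space (stand-in for real VO error data).
errors = np.stack(
    [np.abs(X[:, k]) + 0.1 * rng.random(n_frames) for k in range(n_algorithms)],
    axis=1,
)

# One Random Forest regressor per VO algorithm, predicting that
# algorithm's error as a function of the features.
models = [
    RandomForestRegressor(n_estimators=50, random_state=0).fit(X, errors[:, k])
    for k in range(n_algorithms)
]

def select_algorithm(features: np.ndarray) -> int:
    """Switch to the algorithm with the lowest predicted error."""
    preds = [m.predict(features.reshape(1, -1))[0] for m in models]
    return int(np.argmin(preds))

choice = select_algorithm(X[0])
```

In a real pipeline the selected algorithm's pose increment would be appended to the trajectory each frame, so the ensemble degrades gracefully: when one VO method fails (e.g. low texture, motion blur), its predicted error rises and the switch routes around it.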

Citation (APA)

Holtz, K., Maturana, D., & Scherer, S. (2016). Learning a context-dependent switching strategy for robust visual odometry. In Springer Tracts in Advanced Robotics (Vol. 113, pp. 249–263). Springer Verlag. https://doi.org/10.1007/978-3-319-27702-8_17
