Learning landmark selection policies for mapping unknown environments


Abstract

In general, a mobile robot that operates in unknown environments has to maintain a map and has to determine its own location given the map. This introduces significant computational and memory demands for most autonomous systems, especially for lightweight robots such as humanoids or flying vehicles. In this paper, we present a general approach for learning a landmark selection policy that allows a robot to discard landmarks that are not valuable for its current navigation task. This enables the robot to reduce the computational burden and to carry out its task more efficiently by maintaining only the important landmarks. Our approach applies an unscented Kalman filter to address the simultaneous localization and mapping problem and uses Monte-Carlo reinforcement learning to obtain the selection policy. In addition, we present a technique to compress learned policies without introducing a performance loss. In this way, our approach becomes applicable on systems with constrained memory resources. Based on real-world and simulation experiments, we show that the learned policies allow for efficient robot navigation and outperform handcrafted strategies. We furthermore demonstrate that the learned policies are not only usable in a specific scenario but can also be generalized towards environments with varying properties. © 2011 Springer-Verlag.
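To make the abstract's "Monte-Carlo reinforcement learning to obtain the selection policy" concrete, the following is a minimal toy sketch, not the paper's method: landmark observations are summarized by a hypothetical discretized state (here, two binned features), the agent chooses KEEP or DISCARD, and every-visit Monte-Carlo control with an epsilon-greedy policy averages returns per state-action pair. The reward signal is a synthetic stand-in for the navigation cost the paper would use.

```python
import random
from collections import defaultdict

KEEP, DISCARD = 0, 1
ACTIONS = (KEEP, DISCARD)

def simulate_episode(policy, epsilon, rng):
    """Run one toy episode of 20 keep/discard decisions.

    Each landmark is summarized by a hypothetical 2-feature binned state
    (e.g. distance bin, observation-count bin). Returns a list of
    (state, action, reward) tuples.
    """
    trajectory = []
    for _ in range(20):
        state = (rng.randrange(3), rng.randrange(3))
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)          # explore
        else:
            action = policy.get(state, KEEP)       # exploit current policy
        # Synthetic reward: keeping a "useful" landmark (close and
        # repeatedly observed) helps localization; keeping any other
        # landmark incurs a small computational cost.
        useful = state[0] == 0 and state[1] >= 1
        if action == KEEP:
            reward = 1.0 if useful else -0.2
        else:
            reward = -1.0 if useful else 0.1
        trajectory.append((state, action, reward))
    return trajectory

def mc_control(episodes=2000, epsilon=0.2, seed=1):
    """Every-visit Monte-Carlo control: average sampled rewards into Q,
    then act greedily with respect to Q."""
    rng = random.Random(seed)
    q = defaultdict(float)       # running mean return per (state, action)
    counts = defaultdict(int)
    policy = {}
    for _ in range(episodes):
        for state, action, reward in simulate_episode(policy, epsilon, rng):
            counts[(state, action)] += 1
            q[(state, action)] += (reward - q[(state, action)]) / counts[(state, action)]
            policy[state] = max(ACTIONS, key=lambda a: q[(state, a)])
    return policy

policy = mc_control()
print(policy[(0, 2)], policy[(2, 0)])  # → 0 1 (keep useful, discard useless)
```

In this simplified setting each decision is independent, so Monte-Carlo control reduces to averaging immediate rewards per state; the paper's actual formulation credits the policy over whole navigation runs.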

Citation (APA)

Strasdat, H., Stachniss, C., & Burgard, W. (2011). Learning landmark selection policies for mapping unknown environments. In Springer Tracts in Advanced Robotics (Vol. 70, pp. 483–499). https://doi.org/10.1007/978-3-642-19457-3_29
