Learning indoor robot navigation using visual and sensorimotor map information


Abstract

As a fundamental research topic, autonomous indoor robot navigation remains a challenge in unconstrained real-world indoor environments. Although many models for map-building and planning exist, integrating them is difficult due to the high amount of noise, dynamics, and complexity. Addressing this challenge, this paper describes a neural model for environment mapping and robot navigation based on learning spatial knowledge. Exploiting the fact that a person typically moves within a room without colliding with objects, the model learns spatial knowledge by observing the person's movement with a ceiling-mounted camera. Based on the acquired map, a robot can plan and navigate to any given position in the room, and adapt the map when it identifies possible obstacles. In addition, salient visual features are learned and stored in the map during navigation. This anchoring of visual features in the map enables the robot to find and navigate to a target object when shown an image of it. We implemented this model on a humanoid robot and conducted tests in a home-like environment. The results of our experiments show that the learned sensorimotor map masters complex navigation tasks. © 2013 Yan, Weber and Wermter.
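The map-then-plan pipeline summarized above can be illustrated with a classical stand-in: the paper's learned neural sensorimotor map is replaced here by a plain grid whose cells are marked traversable wherever the observed person walked, and planning is done with A*. The function names (`learn_map`, `plan`), the grid discretization, and 4-connectivity are illustrative assumptions, not the authors' implementation.

```python
import heapq

def learn_map(trajectories):
    # Mark every grid cell visited by the observed person as traversable.
    # (Stand-in for the paper's learned sensorimotor map.)
    free = set()
    for traj in trajectories:
        free.update(traj)
    return free

def plan(free, start, goal):
    # A* search over the traversable cells with 4-connectivity;
    # returns a cell path from start to goal, or None if unreachable.
    def h(c):  # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in visited:
            continue
        visited.add(cur)
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in free and nb not in visited:
                heapq.heappush(frontier, (g + 1 + h(nb), g + 1, nb, path + [nb]))
    return None
```

In this toy version, adapting the map to a newly detected obstacle would simply mean removing the blocked cell from `free` and replanning.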

Citation (APA)
Yan, W., Weber, C., & Wermter, S. (2013). Learning indoor robot navigation using visual and sensorimotor map information. Frontiers in Neurorobotics, 7(OCT). https://doi.org/10.3389/fnbot.2013.00015
