Robot docking based on omnidirectional vision and reinforcement learning


Abstract

We present a system for visual robotic docking that combines an omnidirectional camera with the actor-critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick up an object from it using the pan-tilt camera mounted on the robot. We use a staged approach, since the problem decomposes into distinct sub-tasks that rely on different sensors: the robot first wanders randomly until it locates the table via a landmark; a network trained by reinforcement learning then allows the robot to turn towards and approach the table; once at the table, the robot picks up the object. We argue that this approach has considerable potential, as it allows robot control for navigation to be learned without the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark. © 2006 Springer-Verlag London.
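The coupling between landmark position and motor actions described above can be illustrated with a minimal tabular actor-critic loop. This is a hedged sketch, not the authors' implementation: the discretisation of the landmark's angular position into bins, the three motor actions, the toy world dynamics, and all hyperparameters are assumptions made for illustration only.

```python
import math
import random

# Hypothetical setup: the landmark's horizontal position in the
# omnidirectional image is discretised into 8 angular bins (states);
# the robot has 3 motor actions: turn left, go forward, turn right.
N_STATES, N_ACTIONS = 8, 3
GOAL = 4  # assumed bin in which the landmark is centred ahead

values = [0.0] * N_STATES                             # critic: V(s)
prefs = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # actor: preferences

def policy(state):
    """Sample an action from a softmax over the actor's preferences."""
    exps = [math.exp(p) for p in prefs[state]]
    r, acc = random.random() * sum(exps), 0.0
    for a, e in enumerate(exps):
        acc += e
        if r <= acc:
            return a
    return N_ACTIONS - 1

def step(state, action):
    """Toy dynamics: turning shifts the landmark bin; driving forward
    while the landmark is centred (GOAL) reaches the table."""
    if action == 0:
        state = (state - 1) % N_STATES
    elif action == 2:
        state = (state + 1) % N_STATES
    reward = 1.0 if (action == 1 and state == GOAL) else 0.0
    return state, reward, reward > 0.0

def train(episodes=500, alpha=0.1, beta=0.1, gamma=0.9):
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(50):                  # cap episode length
            a = policy(s)
            s2, r, done = step(s, a)
            # TD error drives both critic and actor updates.
            td = r + (0.0 if done else gamma * values[s2]) - values[s]
            values[s] += alpha * td          # critic update
            prefs[s][a] += beta * td         # actor update
            s = s2
            if done:
                break

random.seed(0)
train()
```

After training, the greedy action when the landmark is centred should be "go forward", i.e. the robot has learned a coupling from landmark position to motor action without any map of the environment.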

Citation (APA)

Muse, D., Weber, C., & Wermter, S. (2006). Robot docking based on omnidirectional vision and reinforcement learning. In Research and Development in Intelligent Systems XXII - Proceedings of AI 2005, the 25th SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence (pp. 23–36). Springer London. https://doi.org/10.1007/978-1-84628-226-3_3
