Learning to Cooperate in Decentralized Multi-robot Exploration of Dynamic Environments


Abstract

This paper presents an approach to training a decentralized multi-robot system to learn a cooperation strategy for exploring dynamic environments. Traditional approaches to the multi-robot exploration problem rely on "pre-designed" cooperation strategies. However, many real-world settings are too complex for humans to design effective strategies by hand. Moreover, a pre-designed strategy cannot adapt to varying features of the task environment, which further limits its real-world applicability. Inspired by the success of deep reinforcement learning in designing complex individual behaviors, we apply the same technique to cooperative learning at the level of the robot collective. Our approach is evaluated in a simulated multi-robot disaster-exploration scenario, and the results show that, in contrast with two traditional "human-designed" methods, it can handle more complicated scenarios.
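To make the idea of decentralized cooperative learning concrete, the sketch below shows two independent learners sharing only an exploration reward signal. This is an illustrative toy, not the paper's method: the paper uses deep reinforcement learning, while the sketch uses tabular Q-learning, and the grid environment, reward values, and hyperparameters are all assumptions chosen to keep the example self-contained.

```python
import random

# Illustrative toy: two decentralized explorer agents on a small grid,
# each with its OWN Q-table (no shared learner). Cooperation emerges
# only through the shared reward for visiting cells no robot has seen.
# All names and values here are assumptions, not the paper's setup.

GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(pos, a):
    """Apply action a to position pos, clamped to grid bounds."""
    x = min(max(pos[0] + ACTIONS[a][0], 0), GRID - 1)
    y = min(max(pos[1] + ACTIONS[a][1], 0), GRID - 1)
    return (x, y)

def train(episodes=200, steps=40, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [dict(), dict()]  # one independent Q-table per agent
    coverage = 0.0
    for _ in range(episodes):
        pos = [(0, 0), (GRID - 1, GRID - 1)]  # start in opposite corners
        visited = set(pos)
        for _ in range(steps):
            for i in range(2):
                s = pos[i]
                qs = q[i].setdefault(s, [0.0] * len(ACTIONS))
                # epsilon-greedy action selection
                if rng.random() < eps:
                    a = rng.randrange(len(ACTIONS))
                else:
                    a = qs.index(max(qs))
                nxt = step(s, a)
                # cooperative reward: +1 only for jointly unexplored cells
                r = 1.0 if nxt not in visited else -0.1
                visited.add(nxt)
                nq = q[i].setdefault(nxt, [0.0] * len(ACTIONS))
                # standard Q-learning update
                qs[a] += alpha * (r + gamma * max(nq) - qs[a])
                pos[i] = nxt
        coverage = len(visited) / (GRID * GRID)
    return q, coverage

q_tables, final_coverage = train()
```

The toy keeps the paper's key structural property — each robot learns from its own experience with no centralized policy — while the hand-designed reward stands in for the richer learned coordination the paper targets.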

Citation (APA)

Geng, M., Zhou, X., Ding, B., Wang, H., & Zhang, L. (2018). Learning to Cooperate in Decentralized Multi-robot Exploration of Dynamic Environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11307 LNCS, pp. 40–51). Springer Verlag. https://doi.org/10.1007/978-3-030-04239-4_4
