Cooperative Multi-Robot Hierarchical Reinforcement Learning

Abstract

Recent advances in multi-robot deep reinforcement learning have made it possible to explore the problem space efficiently, but efficient exploration remains a significant challenge in many complex domains. To alleviate this problem, hierarchical approaches have been designed in which agents operate at multiple levels of abstraction to complete tasks more efficiently. This paper proposes a novel technique, Multi-Agent Hierarchical Deep Deterministic Policy Gradient, that combines the benefits of multi-robot systems with hierarchical deep reinforcement learning. With this technique, agents learn to decompose a problem into simpler subproblems operating at different time scales. Furthermore, this study develops a framework for formulating tasks at multiple levels: the upper levels learn policies that define subgoals for the levels below, while the lowest level learns the robots' policies for primitive actions in the real environment. The proposed method is implemented and validated in a modified Multiple Particle Environment (MPE) scenario.
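
The abstract gives no implementation details; as a rough illustration of the kind of two-level architecture it describes, the sketch below shows, in PyTorch, a single agent whose high-level policy emits a subgoal at a coarse time scale while a low-level policy conditions on (observation, subgoal) to produce primitive actions. All names here (Policy, HierarchicalAgent, horizon) are hypothetical and not taken from the paper; critics, replay buffers, target networks, and the centralized multi-agent training used by DDPG-style methods are omitted for brevity.

    # Illustrative sketch only, under assumptions stated above; not the authors' code.
    import torch
    import torch.nn as nn

    class Policy(nn.Module):
        """Deterministic policy: maps an input vector to a bounded output (tanh)."""
        def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.Tanh(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class HierarchicalAgent:
        """Two-level agent: the high level picks subgoals on a coarser time scale."""
        def __init__(self, obs_dim: int, goal_dim: int, act_dim: int, horizon: int = 10):
            self.high = Policy(obs_dim, goal_dim)            # observation -> subgoal
            self.low = Policy(obs_dim + goal_dim, act_dim)   # (observation, subgoal) -> action
            self.horizon = horizon                           # primitive steps per subgoal
            self.t = 0
            self.goal = None

        def act(self, obs: torch.Tensor) -> torch.Tensor:
            # Re-sample the subgoal every `horizon` primitive steps.
            if self.goal is None or self.t % self.horizon == 0:
                self.goal = self.high(obs)
            self.t += 1
            return self.low(torch.cat([obs, self.goal], dim=-1))

    # Usage: one agent with a 10-D observation, 2-D subgoal, and 2-D action.
    agent = HierarchicalAgent(obs_dim=10, goal_dim=2, act_dim=2)
    action = agent.act(torch.randn(10))

In a multi-robot setting, one such agent would be instantiated per robot, with the low-level policies trained against centralized critics in the usual MADDPG fashion; the subgoal dimension and horizon are design choices this sketch leaves as assumptions.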

Cite

APA

Setyawan, G. E., Hartono, P., & Sawada, H. (2022). Cooperative Multi-Robot Hierarchical Reinforcement Learning. International Journal of Advanced Computer Science and Applications, 13(9), 35–44. https://doi.org/10.14569/IJACSA.2022.0130904
