
Distributed Reinforcement Learning Based Optimal Controller for Mobile Robot Formation


This paper addresses the problem of attaining a desired geometric formation for a group of homogeneous robots using distributed reinforcement learning. Learning by experience is challenging because it requires large amounts of time and data samples. In a multi-agent system (MAS), learning by an individual agent becomes more complex because it must cooperate with its neighboring agents. In this work, a group of homogeneous robots learns a single controller while performing a task in a decentralized manner. The framework uses an actor-critic architecture for local learning, and its update law is derived using Lyapunov stability analysis. A single global controller is then obtained via an average consensus protocol. Simulation and experimental results are given to demonstrate the proposed algorithm.
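The abstract's average-consensus step, by which the agents' locally learned controller parameters are merged into one global controller, can be sketched as follows. This is a minimal illustration of a generic synchronous average-consensus iteration, not the paper's exact algorithm; the agent count, ring topology, and step size `epsilon` are illustrative assumptions.

```python
# Generic average-consensus sketch: each agent repeatedly moves its local
# parameter toward its neighbors' parameters; for a connected undirected
# graph and a small enough step size, all agents converge to the average.
# (Illustrative only -- the paper applies consensus to controller weights.)

def consensus_step(params, neighbors, epsilon=0.2):
    """One synchronous update: x_i <- x_i + epsilon * sum_j (x_j - x_i)."""
    return [
        x + epsilon * sum(params[j] - x for j in neighbors[i])
        for i, x in enumerate(params)
    ]

def run_consensus(params, neighbors, steps=200, epsilon=0.2):
    for _ in range(steps):
        params = consensus_step(params, neighbors, epsilon)
    return params

# Example: 4 agents on a ring graph; values converge to the initial average.
initial = [1.0, 3.0, 5.0, 7.0]                      # average is 4.0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = run_consensus(initial, ring)
```

With `epsilon` below the inverse of the maximum node degree (here 1/2), the iteration is a contraction on the disagreement and every agent's value approaches 4.0, the mean of the initial parameters.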




Shinde, C., Das, K., Kumar, S., & Behera, L. (2018). Distributed Reinforcement Learning Based Optimal Controller for Mobile Robot Formation. In 2018 European Control Conference, ECC 2018 (pp. 2800–2805). Institute of Electrical and Electronics Engineers Inc.
