
Distributed Reinforcement Learning Based Optimal Controller for Mobile Robot Formation

Abstract

This paper addresses the problem of attaining a desired geometric formation for a group of homogeneous robots using distributed reinforcement learning. Learning by experience is challenging because it requires large amounts of time and data samples. In a multi-agent system (MAS), learning for an individual agent becomes more complex because it must cooperate with its neighboring agents. In this work, a group of homogeneous robots learns a single controller while performing a task in a decentralized manner. The framework uses an actor-critic architecture for local learning, and its update law is derived using Lyapunov stability analysis. A single global controller is then obtained through an average consensus protocol. Simulation and experimental results are presented to demonstrate the proposed algorithm.
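The abstract mentions that the agents' locally learned controllers are merged into a single global controller via an average consensus protocol. The paper's exact update is not given here, so the following is only a minimal sketch of the standard discrete-time average consensus iteration, x_i ← x_i + ε Σ_{j∈N(i)} (x_j − x_i), applied to hypothetical local actor parameter vectors; the ring topology, step size, and agent count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def average_consensus(params, neighbors, eps=0.2, steps=200):
    """Standard average consensus protocol (illustrative sketch):
    each agent repeatedly moves its parameter vector toward its
    neighbors' vectors: x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i).
    On a connected undirected graph with eps < 1/max_degree, all
    agents converge to the network-wide average."""
    x = [np.asarray(p, dtype=float).copy() for p in params]
    for _ in range(steps):
        new_x = []
        for i, xi in enumerate(x):
            # Sum of disagreements with neighbors (assumed topology)
            disagreement = sum(x[j] - xi for j in neighbors[i])
            new_x.append(xi + eps * disagreement)
        x = new_x
    return x

# Hypothetical example: 4 agents on a ring, each with a local
# 2-dimensional "actor parameter" vector.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
params = [np.array([i, 2.0 * i]) for i in range(4)]
consensus = average_consensus(params, neighbors)
# All agents end up near the average of the initial vectors.
```

With these initial vectors the element-wise averages are 1.5 and 3.0, so every agent's vector converges to approximately [1.5, 3.0]; in the paper's setting the averaged quantity would be the locally learned controller parameters rather than these toy vectors.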

Citation (APA)

Shinde, C., Das, K., Kumar, S., & Behera, L. (2018). Distributed Reinforcement Learning Based Optimal Controller for Mobile Robot Formation. In 2018 European Control Conference, ECC 2018 (pp. 2800–2805). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.23919/ECC.2018.8550590
