FMR-GA – A cooperative multi-agent reinforcement learning algorithm based on gradient ascent

Abstract

Gradient ascent methods combined with Multi-Agent Reinforcement Learning (MARL) have been studied for years as a promising direction for designing new MARL algorithms. This paper proposes a gradient-based MARL algorithm, Frequency of the Maximal Reward based on Gradient Ascent (FMR-GA), whose aim is to reach the maximal total reward in repeated games. To achieve this goal and simplify the stability analysis, we make efforts in two respects. First, the probability of obtaining the maximal total reward is chosen as the objective function, which simplifies the expression of the gradient and facilitates reaching the learning goal. Second, a factor is designed and added to the gradient, producing stable critical points that correspond to the optimal joint strategy. We first propose a MARL algorithm called Probability of the Maximal Reward based on Infinitesimal Gradient Ascent (PMR-IGA) and analyze its convergence in two-player two-action and two-player three-action repeated games. We then derive a practical MARL algorithm, FMR-GA, from PMR-IGA. Theoretical and simulation results show that FMR-GA converges to the optimal strategy in the cases presented in this paper.
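The abstract is terse about the update rule, so the following is a minimal sketch of the core idea only: two agents in a repeated game each perform gradient ascent on the probability of the maximal total reward, estimating the gradient from the observed frequency of that reward. The payoff matrix, learning rate, batch size, and frequency-based gradient estimator are all illustrative assumptions, not the authors' exact method, and the stabilizing factor the paper adds to the gradient is omitted here.

```python
import random

# Sketch (assumed setup, not the paper's exact FMR-GA rule): two agents,
# two actions each. Each agent keeps a probability p of playing action 0
# and climbs the gradient of P(maximal total reward), estimated from the
# empirical frequency with which each of its own actions produced that reward.

# Total-reward matrix R[a1][a2]; the maximum sits at joint action (0, 0).
R = [[10.0, 2.0],
     [2.0, 5.0]]
R_MAX = 10.0
ETA = 0.05       # learning rate (assumed)
EPISODES = 5000
BATCH = 50       # plays per gradient estimate (assumed)

def clip01(x):
    return min(1.0, max(0.0, x))

def run():
    p1 = p2 = 0.5  # each agent's probability of choosing action 0
    for _ in range(EPISODES):
        # Count, per own action, how often the maximal reward was obtained.
        hits1, plays1 = [0, 0], [0, 0]
        hits2, plays2 = [0, 0], [0, 0]
        for _ in range(BATCH):
            a1 = 0 if random.random() < p1 else 1
            a2 = 0 if random.random() < p2 else 1
            got_max = R[a1][a2] >= R_MAX
            plays1[a1] += 1; hits1[a1] += got_max
            plays2[a2] += 1; hits2[a2] += got_max
        f1 = [hits1[a] / plays1[a] if plays1[a] else 0.0 for a in (0, 1)]
        f2 = [hits2[a] / plays2[a] if plays2[a] else 0.0 for a in (0, 1)]
        # Frequency-based estimate of d P(max reward) / d p_i, then ascend.
        p1 = clip01(p1 + ETA * (f1[0] - f1[1]))
        p2 = clip01(p2 + ETA * (f2[0] - f2[1]))
    return p1, p2

if __name__ == "__main__":
    print(run())  # approaches (1.0, 1.0): the optimal joint strategy (0, 0)
```

Because the maximal reward here occurs only at the joint action (0, 0), the objective is P = p1 * p2, and the frequency difference f(0) - f(1) each agent computes is an unbiased estimate of its partial derivative of P, so both probabilities drift toward 1.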

Citation (APA)

Zhang, Z., Wang, D., Zhao, D., & Song, T. (2017). FMR-GA – A cooperative multi-agent reinforcement learning algorithm based on gradient ascent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10634 LNCS, pp. 840–848). Springer Verlag. https://doi.org/10.1007/978-3-319-70087-8_86
