Gradient based method for symmetric and asymmetric multiagent reinforcement learning


Abstract

This paper introduces a gradient-based method for both symmetric and asymmetric multiagent reinforcement learning. In symmetric multiagent reinforcement learning, all agents involved in the learning task have equal information states. In asymmetric multiagent reinforcement learning, by contrast, the information states are unequal: some agents (leaders) try to encourage agents with less information (followers) to select actions that improve the leaders' overall utility value. In both cases the number of parameters to learn is very large, so parametric function approximation methods are needed to represent the agents' value functions. The proposed method is based on the VAPS framework, extended to utilize the theory of Markov games, a natural basis for multiagent reinforcement learning.
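The paper itself specifies the exact VAPS-based update rules, which the abstract does not reproduce. As a rough illustration of the general idea of gradient-based learning with a parametric value function over joint actions in a Markov game, the Python sketch below performs one semi-gradient TD step on a linear Q-function. The feature map, learning rate, and update rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative sketch only: a linear Q-function for one agent in a
# two-agent Markov game, Q(s, a1, a2) = w . phi(s, a1, a2).
# The feature map and update rule below are toy assumptions and do
# not reproduce the paper's VAPS-based algorithm.

def phi(state, a1, a2, n_features=8):
    """Toy feature map: a fixed random projection of (state, a1, a2).

    Deterministic within a single process; a real implementation would
    use domain-specific features instead.
    """
    rng = np.random.default_rng(hash((state, a1, a2)) % (2**32))
    return rng.standard_normal(n_features)

def td_gradient_step(w, transition, gamma=0.95, alpha=0.01):
    """One semi-gradient step on the squared TD error of a linear Q."""
    state, a1, a2, reward, next_state, next_a1, next_a2 = transition
    q = w @ phi(state, a1, a2)
    q_next = w @ phi(next_state, next_a1, next_a2)
    td_error = reward + gamma * q_next - q
    # For a linear Q-function, the gradient w.r.t. w is phi(s, a1, a2).
    return w + alpha * td_error * phi(state, a1, a2)

# Usage: update the weights from one observed joint transition.
w = np.zeros(8)
transition = ("s0", 0, 1, 1.0, "s1", 1, 0)
w = td_gradient_step(w, transition)
```

In the asymmetric case described in the abstract, the leader and follower would not share such a symmetric update; the leader's update would anticipate the follower's response, which is where the Markov-game extension of VAPS comes in.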

Citation (APA)

Könönen, V. (2004). Gradient based method for symmetric and asymmetric multiagent reinforcement learning. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2690, 68–75. https://doi.org/10.1007/978-3-540-45080-1_9
