Multi-agent reinforcement learning with approximate model learning for competitive games


Abstract

We propose a method for learning multi-agent policies to compete against multiple opponents. The method consists of recurrent neural network-based actor-critic networks and deterministic policy gradients that promote cooperation between agents through communication. The learning process does not require access to opponents' parameters or observations because the agents are trained separately from the opponents. The actor networks enable the agents to communicate via forward and backward paths, while the critic network helps train the actors by delivering gradient signals based on each agent's contribution to the global reward. Moreover, to address the nonstationarity caused by the evolving policies of other agents, we propose approximate model learning that uses auxiliary prediction networks to model the state transitions, the reward function, and opponent behavior. In the test phase, we use competitive multi-agent environments to compare the proposed method against alternatives in terms of learning efficiency and goal achievement. The results show that the proposed method outperforms the alternatives.
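To make the components described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a recurrent actor with a communication channel, a centralized critic over joint observations and actions, and auxiliary prediction heads for next observations, rewards, and opponent actions. All module names, layer sizes, and loss forms are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentActor(nn.Module):
    """GRU-based actor that emits a deterministic action and a message for teammates."""
    def __init__(self, obs_dim, msg_dim, act_dim, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim + msg_dim, hidden)
        self.action_head = nn.Linear(hidden, act_dim)   # deterministic action output
        self.message_head = nn.Linear(hidden, msg_dim)  # message passed to other agents

    def forward(self, obs, incoming_msg, h):
        h = self.gru(torch.cat([obs, incoming_msg], dim=-1), h)
        action = torch.tanh(self.action_head(h))
        message = torch.tanh(self.message_head(h))
        return action, message, h

class CentralizedCritic(nn.Module):
    """Q(joint observation, joint action); its gradients are fed back to the actors."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

class AuxiliaryModels(nn.Module):
    """Approximate models: next-observation, reward, and opponent-action predictors."""
    def __init__(self, obs_dim, act_dim, opp_act_dim, hidden=64):
        super().__init__()
        self.next_obs = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, obs_dim))
        self.reward = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))
        self.opponent = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, opp_act_dim))

    def losses(self, obs, act, next_obs, reward, opp_act):
        # Supervised prediction losses on transitions collected during training.
        sa = torch.cat([obs, act], dim=-1)
        return (F.mse_loss(self.next_obs(sa), next_obs)
                + F.mse_loss(self.reward(sa), reward)
                + F.mse_loss(self.opponent(obs), opp_act))
```

In such a sketch, the auxiliary losses would typically be added to the actor-critic objective with a weighting coefficient, so the shared representation is shaped by predictions of the environment dynamics and opponent behavior as well as by the policy gradient.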

Cite

APA

Park, Y. J., Cho, Y. S., & Kim, S. B. (2019). Multi-agent reinforcement learning with approximate model learning for competitive games. PLoS ONE, 14(9). https://doi.org/10.1371/journal.pone.0222215
