An Enhanced Model-Free Reinforcement Learning Algorithm to Solve Nash Equilibrium for Multi-Agent Cooperative Game Systems


Abstract

Solving for the Nash equilibrium is important in multi-agent game systems, and the speed at which the equilibrium is reached is critical for agents that must make real-time decisions. A typical approach is a model-free reinforcement learning algorithm based on policy iteration, which is slow because every iteration must be evaluated from the start state to the end state. In this paper, we propose a faster scheme based on value iteration that uses the Q-function in an online manner to solve for the Nash equilibrium of the system. Because each update builds on the value from the previous iteration, the proposed scheme converges much faster than policy iteration. The rationality and convergence of the scheme are analyzed and proved theoretically, and an actor-critic network structure is used to implement it in simulation. The simulation results show that the proposed scheme converges about 10 times faster than the policy iteration algorithm.
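To illustrate the value-iteration idea the abstract describes, the sketch below shows a minimal tabular Q-function value iteration for a small fully cooperative two-agent game. This is not the authors' actor-critic implementation; it only demonstrates how each backup reuses the value from the previous iteration rather than re-evaluating a policy from start state to end state. All names and parameters (n_states, nA1, nA2, P, R, gamma) are illustrative assumptions, and the fact used here is that in a fully cooperative game (shared reward) a jointly optimal policy is also a Nash equilibrium.

```python
import numpy as np

# Minimal sketch, assuming a random finite cooperative Markov game:
# two agents, a shared reward, and known transition probabilities.
n_states, nA1, nA2 = 4, 2, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, nA1, nA2))  # P[s, a1, a2, s']
R = rng.normal(size=(n_states, nA1, nA2))                        # shared reward
gamma = 0.9

Q = np.zeros((n_states, nA1, nA2))
for _ in range(500):
    # Value iteration: each backup starts from the Q-values of the last iteration.
    V = Q.reshape(n_states, -1).max(axis=1)   # cooperative: maximize over joint actions
    Q_new = R + gamma * P @ V                 # Bellman optimality backup
    if np.max(np.abs(Q_new - Q)) < 1e-8:
        break
    Q = Q_new

# Greedy joint policy; for a shared reward this is a Nash equilibrium.
joint = Q.reshape(n_states, -1).argmax(axis=1)
pi1, pi2 = np.unravel_index(joint, (nA1, nA2))
print("agent-1 actions per state:", pi1)
print("agent-2 actions per state:", pi2)
```

In the paper itself the Q-function and policies are represented by actor-critic networks and updated online from data rather than from a known transition model; the tabular loop above is only meant to contrast value-iteration backups with full policy-evaluation sweeps.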

Citation (APA)

Jiang, Y., & Tan, F. (2020). An Enhanced Model-Free Reinforcement Learning Algorithm to Solve Nash Equilibrium for Multi-Agent Cooperative Game Systems. IEEE Access, 8, 223743–223755. https://doi.org/10.1109/ACCESS.2020.3043806
