QSOD: Hybrid Policy Gradient for Deep Multi-agent Reinforcement Learning

Abstract

When individuals interact with one another to accomplish specific goals, they learn from others' experiences to achieve the tasks at hand. The same holds for learning in virtual environments, such as video games. Deep multi-agent reinforcement learning shows promising results on many challenging tasks. Most such algorithms rely on value decomposition: to guide each agent's behavior, the joint Q-value of all agents is decomposed into individual agent Q-values. Value decomposition algorithms such as QMIX and QVMix differ in their mixing methods but share a monotonicity assumption. However, these methods select each agent's actions through a greedy policy, and they do not address the large number of training trials the agents require. In this paper, we propose a novel hybrid policy for individual agents' action selection, called Q-value Selection using Optimization and DRL (QSOD). A grey wolf optimizer (GWO) determines the choice of each agent's actions. As in GWO, the agents coordinate with one another, which distributes attention appropriately among them. We used the StarCraft II Learning Environment to compare our proposed algorithm with the state-of-the-art algorithms QMIX and QVMix. Experimental results demonstrate that our algorithm outperforms QMIX and QVMix in all scenarios and requires fewer training trials.
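For readers unfamiliar with the value decomposition the abstract refers to, the following is a minimal sketch of QMIX-style monotonic mixing, assuming the monotonicity constraint is enforced by taking absolute values of the mixing weights. In QMIX proper these weights come from hypernetworks conditioned on the global state; the names here (monotonic_mix, w1, b1, w2, b2) are illustrative, not taken from the paper.

    import numpy as np

    def monotonic_mix(agent_qs, w1, b1, w2, b2):
        # agent_qs: (n_agents,) per-agent Q-values; returns scalar Q_tot.
        # |w| keeps every mixing weight non-negative, so dQ_tot/dQ_i >= 0:
        # the argmax of Q_tot stays consistent with each agent's own argmax.
        hidden = np.maximum(np.abs(w1) @ agent_qs + b1, 0.0)  # QMIX uses ELU; ReLU for brevity
        return float(np.abs(w2) @ hidden + b2)

Because the mixing is monotone in each agent's Q-value, each agent can act greedily on its own Q-values at execution time while training against the joint Q_tot.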
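The abstract does not spell out how the grey wolf optimizer drives action selection, so the sketch below is one plausible reading under stated assumptions: each "wolf" encodes a candidate joint action (one continuous coordinate per agent), fitness is the sum of the agents' Q-values for the rounded actions, and the pack follows the standard GWO alpha/beta/delta update. All identifiers (gwo_select_actions, n_wolves, n_iters) are hypothetical, not the paper's API.

    import numpy as np

    def gwo_select_actions(q_values, n_wolves=20, n_iters=30, seed=0):
        # q_values: (n_agents, n_actions) per-agent Q-values.
        # Returns one discrete action index per agent.
        rng = np.random.default_rng(seed)
        n_agents, n_actions = q_values.shape

        def fitness(pos):
            # Round continuous positions to discrete actions, sum agents' Q-values.
            acts = np.clip(np.rint(pos), 0, n_actions - 1).astype(int)
            return q_values[np.arange(n_agents), acts].sum()

        # Initialize the pack with random candidate joint actions.
        wolves = rng.uniform(0, n_actions - 1, size=(n_wolves, n_agents))
        for t in range(n_iters):
            scores = np.array([fitness(w) for w in wolves])
            # Alpha, beta, delta: the three fittest wolves lead the pack.
            leaders = wolves[np.argsort(scores)[::-1][:3]]
            a = 2.0 * (1 - t / n_iters)  # exploration coefficient decays from 2 toward 0
            for i in range(n_wolves):
                x_new = np.zeros(n_agents)
                for leader in leaders:
                    r1, r2 = rng.random(n_agents), rng.random(n_agents)
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = np.abs(C * leader - wolves[i])
                    x_new += leader - A * D  # standard GWO position update
                wolves[i] = np.clip(x_new / 3.0, 0, n_actions - 1)
        best = wolves[np.argmax([fitness(w) for w in wolves])]
        return np.clip(np.rint(best), 0, n_actions - 1).astype(int)

    # Usage: pick a joint action for 3 agents with 5 actions each.
    actions = gwo_select_actions(np.random.randn(3, 5))

Compared with per-agent greedy selection, searching over joint actions this way lets the leaders' positions pull all agents' choices toward mutually high-value combinations, which matches the abstract's claim that coordination among agents guides individual action selection.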

Citation (APA)

Rehman, H. M. R. U., On, B. W., Ningombam, D. D., Yi, S., & Choi, G. S. (2021). QSOD: Hybrid Policy Gradient for Deep Multi-agent Reinforcement Learning. IEEE Access, 9, 129728–129741. https://doi.org/10.1109/ACCESS.2021.3113350
