SparseMAAC: Sparse Attention for Multi-agent Reinforcement Learning

Abstract

In multi-agent scenarios, each agent needs to be aware of other agents' information as well as the environment in order to improve the performance of reinforcement learning methods. However, as the number of agents increases, this procedure becomes significantly more complicated, making it hard to improve efficiency. We introduce a sparse attention mechanism into the multi-agent reinforcement learning framework and propose a novel Multi-Agent Sparse Attention Actor Critic (SparseMAAC) algorithm. Our framework enables agents to efficiently select and focus on those agents with critical impact in the early training stages, while simultaneously filtering out data noise. The experimental results show that the proposed SparseMAAC algorithm not only exceeds the baseline algorithms in reward, but is also significantly superior to them in convergence speed.
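The abstract does not give implementation details, but "sparse attention" in this line of work typically replaces the softmax in the attention layer with sparsemax (Martins & Astudillo, 2016), which can assign exactly zero weight to low-scoring agents. The following NumPy sketch is illustrative only, not the authors' code:

```python
import numpy as np

def sparsemax(scores):
    """Project attention scores onto the probability simplex.

    Unlike softmax, sparsemax can output exact zeros, so an agent
    attending over others can fully ignore the irrelevant ones.
    """
    z = np.asarray(scores, dtype=float)
    z_sorted = np.sort(z)[::-1]           # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum   # entries kept in the support
    k_z = k[support][-1]                  # support size
    tau = (cumsum[support][-1] - 1) / k_z # threshold
    return np.maximum(z - tau, 0.0)

# Hypothetical compatibility scores of one agent toward three others:
weights = sparsemax([3.0, 1.0, 0.1])
# weights sum to 1, and the two weakly-scored agents get exactly 0,
# i.e. they are pruned from this agent's attention.
```

In a SparseMAAC-style critic, weights like these would then combine the other agents' value features, so gradients flow only through the agents actually attended to.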

Citation (APA)

Li, W., Jin, B., & Wang, X. (2019). SparseMAAC: Sparse Attention for Multi-agent Reinforcement Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11448 LNCS, pp. 96–110). Springer Verlag. https://doi.org/10.1007/978-3-030-18590-9_7
