We consider the problem of multiple agents cooperating in a partially observable environment. Agents must learn to coordinate and share relevant information to solve tasks successfully. This article describes Asynchronous Advantage Actor-Critic with Communication (A3C2), an end-to-end differentiable approach in which agents learn policies and communication protocols simultaneously. A3C2 follows a centralized-learning, distributed-execution paradigm and supports independent agents, dynamic team sizes, partially observable environments, and noisy communication. We show experimentally that A3C2 outperforms other state-of-the-art approaches in multiple environments.
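To make the end-to-end differentiable setup concrete, the sketch below shows one plausible shape for such an agent in PyTorch. All names (`CommAgent`, `obs_dim`, `msg_dim`, the additive-Gaussian channel noise, and the all-to-all broadcast topology) are illustrative assumptions, not details taken from the paper: each agent consumes its partial observation plus teammates' previous messages, and emits action logits, a value estimate, and an outgoing message through a differentiable head, so gradients can flow between agents during training.

```python
# Illustrative sketch of an A3C2-style agent network (assumed architecture,
# not the authors' exact model). Policy, value, and message heads share an
# encoder; noise on the message models a lossy channel.
import torch
import torch.nn as nn

class CommAgent(nn.Module):
    def __init__(self, obs_dim: int, msg_dim: int, n_agents: int,
                 n_actions: int, hidden: int = 128):
        super().__init__()
        # Input: own observation concatenated with teammates' messages.
        in_dim = obs_dim + msg_dim * (n_agents - 1)
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)   # actor
        self.value_head = nn.Linear(hidden, 1)            # critic
        self.message_head = nn.Linear(hidden, msg_dim)    # outgoing message

    def forward(self, obs, incoming_msgs, msg_noise_std: float = 0.0):
        h = self.encoder(torch.cat([obs, incoming_msgs], dim=-1))
        logits = self.policy_head(h)
        value = self.value_head(h).squeeze(-1)
        msg = torch.tanh(self.message_head(h))
        # Additive noise is applied after a differentiable head, so gradients
        # from teammates' losses still reach this agent's message weights.
        if msg_noise_std > 0:
            msg = msg + msg_noise_std * torch.randn_like(msg)
        return logits, value, msg

if __name__ == "__main__":
    n_agents, obs_dim, msg_dim, n_actions = 3, 16, 8, 5
    agents = [CommAgent(obs_dim, msg_dim, n_agents, n_actions)
              for _ in range(n_agents)]
    obs = torch.randn(n_agents, obs_dim)
    msgs = torch.zeros(n_agents, msg_dim)  # messages from the previous step
    for i, agent in enumerate(agents):
        # Each agent reads every teammate's message except its own.
        others = torch.cat([msgs[j] for j in range(n_agents)
                            if j != i]).unsqueeze(0)
        logits, value, out_msg = agent(obs[i].unsqueeze(0), others,
                                       msg_noise_std=0.05)
        action = torch.distributions.Categorical(logits=logits).sample()
```

Under these assumptions, centralized learning would sum the agents' actor-critic losses and backpropagate jointly, letting one agent's objective shape its teammates' communication; at execution time each agent runs its own copy independently, exchanging only the message vectors, which is what permits distributed execution.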