Multi-agent neural reinforcement-learning system with communication

Abstract

Deep learning models have recently emerged as popular function approximators for single-agent reinforcement learning, accurately estimating the value function of complex environments and generalizing to previously unseen states. In multi-agent settings, agents must cope with the non-stationarity of the environment caused by the presence of other agents, and can exploit information-sharing techniques for improved coordination. We propose a neural-based actor-critic algorithm that learns communication protocols between agents and implicitly shares information during the learning phase. Large numbers of agents communicate through a self-learned protocol during distributed execution, and reliably learn complex strategies and protocols for partially observable multi-agent environments.
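To make the described architecture concrete, the sketch below shows one way an agent's actor network could emit both an environment action and a message that other agents receive as input, which is the general shape of learned-communication actor-critic methods. This is an illustrative assumption, not the authors' implementation; the PyTorch framework, the network sizes, the message dimensionality, and the class name `CommActor` are all hypothetical choices.

```python
import torch
import torch.nn as nn


class CommActor(nn.Module):
    """Hypothetical actor that outputs an action distribution and an outgoing message."""

    def __init__(self, obs_dim, msg_dim, n_actions, hidden=64):
        super().__init__()
        # Shared encoder over the agent's partial observation and received messages.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden),
            nn.ReLU(),
        )
        # Policy head: logits over environment actions.
        self.action_head = nn.Linear(hidden, n_actions)
        # Communication head: continuous message broadcast to the other agents.
        self.message_head = nn.Linear(hidden, msg_dim)

    def forward(self, obs, incoming_msg):
        h = self.encoder(torch.cat([obs, incoming_msg], dim=-1))
        action_logits = self.action_head(h)               # consumed by the actor's policy
        outgoing_msg = torch.tanh(self.message_head(h))   # learned protocol signal
        return action_logits, outgoing_msg


# Example: two agents exchanging messages over a single step.
obs_dim, msg_dim, n_actions = 8, 4, 5
agents = [CommActor(obs_dim, msg_dim, n_actions) for _ in range(2)]
obs = [torch.randn(1, obs_dim) for _ in range(2)]
msgs = [torch.zeros(1, msg_dim) for _ in range(2)]       # no messages at the first step
for i, agent in enumerate(agents):
    logits, msgs[i] = agent(obs[i], msgs[1 - i])          # read the other agent's message
    action = torch.distributions.Categorical(logits=logits).sample()
```

During training, the critic would be updated centrally while each actor only needs its own observation and the incoming messages, which is consistent with the distributed execution described in the abstract.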

Citation (APA)

Simões, D., Lau, N., & Reis, L. P. (2019). Multi-agent neural reinforcement-learning system with communication. In Advances in Intelligent Systems and Computing (Vol. 931, pp. 3–12). Springer Verlag. https://doi.org/10.1007/978-3-030-16184-2_1
