Deep learning models have recently emerged as popular function approximators for single-agent reinforcement learning, accurately estimating the value functions of complex environments and generalizing to previously unseen states. In multi-agent settings, agents must cope with the non-stationarity that the presence of other agents introduces into the environment, but can also exploit information-sharing techniques for improved coordination. We propose a neural actor-critic algorithm that learns communication protocols between agents and implicitly shares information during the learning phase. Large numbers of agents communicate through a self-learned protocol during distributed execution, and reliably learn complex strategies and protocols in partially observable multi-agent environments.
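A minimal sketch of how such a self-learned communication channel could look at execution time: each agent's policy network consumes its local observation together with a message received from another agent, and emits both an action distribution and an outgoing message. All dimensions, class names, and the toy NumPy networks below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class CommActor:
    """Hypothetical actor: maps (observation, incoming message) to an
    action distribution and an outgoing message vector."""

    def __init__(self, obs_dim, msg_dim, n_actions, hidden=32):
        self.W_in = rng.normal(0, 0.1, (obs_dim + msg_dim, hidden))
        self.W_act = rng.normal(0, 0.1, (hidden, n_actions))
        self.W_msg = rng.normal(0, 0.1, (hidden, msg_dim))

    def forward(self, obs, msg_in):
        # Shared hidden representation of observation + received message.
        h = np.tanh(np.concatenate([obs, msg_in]) @ self.W_in)
        # Softmax over action logits.
        logits = h @ self.W_act
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Bounded message broadcast to the other agent next step.
        msg_out = np.tanh(h @ self.W_msg)
        return probs, msg_out

# Two agents exchange messages for one decentralized execution step.
agents = [CommActor(obs_dim=4, msg_dim=2, n_actions=3) for _ in range(2)]
msgs = [np.zeros(2), np.zeros(2)]
obs = [rng.normal(size=4), rng.normal(size=4)]
action_probs = []
for i, agent in enumerate(agents):
    probs, msgs[i] = agent.forward(obs[i], msgs[1 - i])
    action_probs.append(probs)
```

In a full training setup, the message weights would be optimized jointly with the policy via the critic's gradient, so the protocol itself is learned rather than hand-designed.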
Citation:
Simões, D., Lau, N., & Reis, L. P. (2019). Multi-agent neural reinforcement-learning system with communication. In Advances in Intelligent Systems and Computing (Vol. 931, pp. 3–12). Springer Verlag. https://doi.org/10.1007/978-3-030-16184-2_1