Communication-robust multi-agent learning by adaptable auxiliary multi-agent adversary generation

Abstract

Communication can promote coordination in cooperative Multi-Agent Reinforcement Learning (MARL). However, existing works mainly focus on improving agents' communication efficiency, neglecting that real-world communication is far more challenging: messages may be corrupted by noise or targeted by attackers. The robustness of communication-based policies is therefore an urgent issue that needs further exploration. In this paper, we posit that an ego system trained with auxiliary adversaries can address this limitation, and we propose an adaptable method, Multi-Agent Auxiliary Adversaries Generation for robust Communication (MA3C), to obtain a robust communication-based policy. Specifically, we introduce a novel message-attacking approach that models the learning of the auxiliary attackers as a cooperative problem with the shared goal of minimizing the coordination ability of the ego system, under which every communication channel may suffer a distinct message attack. Furthermore, since naive adversarial training may impair the generalization ability of the ego system, we design an attacker-population generation approach based on evolutionary learning. Finally, the ego system is paired with an attacker population and trained alternately against the continuously evolving attackers, so that both the ego system and the attackers remain adaptable. Extensive experiments on multiple benchmarks show that MA3C provides comparable or better robustness and generalization than existing baselines.
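The abstract describes an alternating scheme: an attacker population evolves to degrade the ego system's coordination, while the ego system trains against the strongest attackers in the pool. The toy sketch below illustrates that control flow only; all names, the scalar "skill"/"noise" model, and the update rules are illustrative placeholders, not the authors' actual algorithm.

```python
import random

# Hypothetical sketch of MA3C-style alternating training: an ego system paired
# with an evolving population of message attackers. Everything here is a toy
# stand-in for the real MARL rollouts and policy-gradient updates.

def attack_messages(messages, attacker):
    # Each attacker perturbs every inter-agent message channel.
    return [m + attacker["noise"] for m in messages]

def evaluate_ego(ego, attacker):
    # Stand-in for a rollout: coordination score of the ego policy under attack
    # (lower score = more damaging attacker).
    messages = [ego["msg"]] * 3
    attacked = attack_messages(messages, attacker)
    return ego["skill"] - sum(abs(a - ego["msg"]) for a in attacked)

def evolve_population(population, ego, mutation=0.1):
    # Evolutionary step: keep the attackers that hurt the ego most, then mutate
    # them to refill the population.
    population.sort(key=lambda atk: evaluate_ego(ego, atk))  # strongest first
    survivors = population[: len(population) // 2]
    children = [{"noise": p["noise"] + random.uniform(-mutation, mutation)}
                for p in survivors]
    return survivors + children

def train_ego(ego, population, lr=0.05):
    # Ego update: improve robustness against the current worst-case attacker
    # (a toy proxy for an adversarial policy-gradient step).
    worst = min(population, key=lambda atk: evaluate_ego(ego, atk))
    ego["skill"] += lr * abs(worst["noise"])
    return ego

random.seed(0)
ego = {"skill": 1.0, "msg": 0.0}
population = [{"noise": random.uniform(-1, 1)} for _ in range(8)]
for _ in range(10):  # alternating optimization of attackers and ego
    population = evolve_population(population, ego)
    ego = train_ego(ego, population)
print(f"final ego skill: {ego['skill']:.3f}, population size: {len(population)}")
```

The key structural point, mirrored from the abstract, is that both sides are adaptable: the population step and the ego step run in alternation, so the ego never overfits to a single fixed adversary.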

Citation (APA)
Yuan, L., Chen, F., Zhang, Z., & Yu, Y. (2024). Communication-robust multi-agent learning by adaptable auxiliary multi-agent adversary generation. Frontiers of Computer Science, 18(6). https://doi.org/10.1007/s11704-023-2733-5
