Cooperative hunting has long received considerable attention because it may represent an evolutionary origin of cooperation and even of our sociality. The level of organization of such predation is known to vary among species. Although predator-prey interactions have been studied in multi-agent reinforcement learning domains, there have been few attempts to use these simulations to better understand the behavior of humans and other animals. In this study, we introduce a predator-prey simulation environment based on multi-agent deep reinforcement learning that can bridge the gap between the biological/ecological and artificial intelligence domains. Using this environment, we show that organized cooperative hunting patterns with role division among individuals, regarded as the highest level of organization in the cooperative hunting of animals in nature, can emerge via one of the simplest forms of multi-agent deep reinforcement learning. Our results suggest that sophisticated collaborative patterns, which have often been thought to require high cognition, can be realized through relatively simple cognitive and learning mechanisms, and that there is a close link between the behavioral patterns of agents and those of animals, both acquired through interaction with their environments.
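To make the setup concrete, the following is a minimal sketch of the kind of multi-predator pursuit environment the abstract describes. All details here (discrete grid, Manhattan-distance capture, randomly fleeing prey, shared reward) are illustrative assumptions, not the authors' actual continuous environment or learning setup.

```python
import random

class PredatorPreyEnv:
    """Illustrative grid-world pursuit task: several predators share a
    reward for capturing a single prey, which encourages cooperation.
    This is a simplified stand-in, not the paper's environment."""

    # up, down, right, left, stay
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

    def __init__(self, size=10, n_predators=3, seed=None):
        self.size = size
        self.n_predators = n_predators
        self.rng = random.Random(seed)

    def reset(self):
        cells = [(x, y) for x in range(self.size) for y in range(self.size)]
        pos = self.rng.sample(cells, self.n_predators + 1)
        self.predators, self.prey = pos[:-1], pos[-1]
        return self._obs()

    def _clip(self, p):
        # keep a position inside the grid
        return (max(0, min(self.size - 1, p[0])),
                max(0, min(self.size - 1, p[1])))

    def _obs(self):
        # fully observable sketch: every predator sees all positions
        return [tuple(self.predators) + (self.prey,)
                for _ in range(self.n_predators)]

    def step(self, actions):
        """actions: one ACTIONS index per predator (joint action)."""
        self.predators = [
            self._clip((p[0] + a[0], p[1] + a[1]))
            for p, a in zip(self.predators,
                            (self.ACTIONS[i] for i in actions))
        ]
        # prey moves randomly here, as a placeholder for an evading prey
        da = self.rng.choice(self.ACTIONS)
        self.prey = self._clip((self.prey[0] + da[0], self.prey[1] + da[1]))
        # capture: any predator within Manhattan distance 1 of the prey
        captured = any(abs(p[0] - self.prey[0]) + abs(p[1] - self.prey[1]) <= 1
                       for p in self.predators)
        reward = 1.0 if captured else -0.01  # shared team reward
        return self._obs(), reward, captured
```

In the simplest ("independent learner") form of multi-agent deep RL, each predator would train its own value network against such an environment, treating the other agents as part of the environment dynamics; role division can then emerge without any explicit coordination mechanism.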
Citation:
Tsutsui, K., Takeda, K., & Fujii, K. (2023). Emergence of Collaborative Hunting via Multi-Agent Deep Reinforcement Learning. In Lecture Notes in Computer Science (Vol. 13643 LNCS, pp. 210–224). Springer. https://doi.org/10.1007/978-3-031-37660-3_15