Evolutionary multi-agent systems

Abstract

In multi-agent learning, agents must learn to select actions that maximize their utility given the action choices of the other agents. Cooperative coevolution offers a way to evolve multiple elements that together form a whole, by using a separate population for each element. We apply this setup to the problem of multi-agent learning, arriving at an evolutionary multi-agent system (EA-MAS). We first study a problem that requires agents to select their actions in parallel and investigate the problem-solving capacity of the EA-MAS for a wide range of settings. Second, we investigate the transfer of the COllective INtelligence (COIN) framework to the EA-MAS. COIN is a proven engineering approach for learning cooperative tasks in MASs; it reengineers the utilities of the agents so that each agent's utility contributes to the global utility. We find that, as in the reinforcement learning case, the use of the Wonderful Life Utility specified by COIN also leads to improved results for the EA-MAS. © Springer-Verlag 2004.
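To make the two ideas the abstract combines concrete, the following Python sketch pairs a cooperative-coevolution loop (one population of candidate actions per agent) with two fitness assignments: the global utility G and a Wonderful Life Utility that clamps away an agent's contribution. The coverage-style task, population sizes, operators, and the clamping choice are illustrative assumptions, not the paper's benchmark or the authors' implementation.

```python
import random

# Hypothetical stand-in task (NOT the benchmark from the paper): N agents each
# pick one of K actions in parallel, and the global utility G rewards covering
# as many distinct actions as possible.
N_AGENTS, N_ACTIONS = 8, 8
POP_SIZE, GENERATIONS, MUT_RATE = 20, 200, 0.1


def global_utility(joint_action):
    """G(z): number of distinct actions covered by the joint action z."""
    return len(set(joint_action))


def wonderful_life_utility(joint_action, i):
    """WLU_i(z) = G(z) - G(z with agent i's contribution clamped away)."""
    without_i = joint_action[:i] + joint_action[i + 1:]
    return global_utility(joint_action) - global_utility(without_i)


def evolve(agent_utility):
    """Cooperative coevolution: one population of candidate actions per agent."""
    pops = [[random.randrange(N_ACTIONS) for _ in range(POP_SIZE)]
            for _ in range(N_AGENTS)]
    for _ in range(GENERATIONS):
        for i in range(N_AGENTS):
            # Evaluate agent i's candidates against one collaborator sampled
            # from each of the other populations.
            partners = [random.choice(pops[j]) for j in range(N_AGENTS)]
            scored = sorted(
                ((agent_utility(partners[:i] + [cand] + partners[i + 1:], i), cand)
                 for cand in pops[i]),
                reverse=True)
            # Truncation selection; refill with mutated copies of the elite.
            elite = [cand for _, cand in scored[:POP_SIZE // 2]]
            children = [random.randrange(N_ACTIONS) if random.random() < MUT_RATE
                        else cand
                        for cand in random.choices(elite, k=POP_SIZE - len(elite))]
            pops[i] = elite + children
    # Assemble a joint action from the current best candidate of each population.
    return [pop[0] for pop in pops]


if __name__ == "__main__":
    random.seed(0)
    for name, util in [("global utility G", lambda z, i: global_utility(z)),
                       ("Wonderful Life Utility", wonderful_life_utility)]:
        joint = evolve(util)
        print(f"{name}: joint action {joint}, G = {global_utility(joint)}")
```

The point of the WLU line is the one the abstract makes: each agent's fitness is the marginal effect of its own action on the global utility, which keeps individual selection pressure aligned with the system-wide objective.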

Citation (APA)

't Hoen, P. J., & de Jong, E. D. (2004). Evolutionary multi-agent systems. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3242, 872–881. https://doi.org/10.1007/978-3-540-30217-9_88
