Learning to coordinate using commitment sequences in cooperative multi-agent systems


Abstract

We report on an investigation of the learning of coordination in cooperative multi-agent systems. Specifically, we study solutions that are applicable to independent agents, i.e., agents that do not observe one another's actions. In previous research [5] we presented a reinforcement learning approach that converges to the optimal joint action even in scenarios with high miscoordination costs. However, that approach failed in fully stochastic environments. In this paper, we present a novel approach based on reward estimation with a shared action-selection protocol. The new technique is applicable in fully stochastic environments where mutual observation of actions is not possible. We demonstrate empirically that with our approach the agents almost always converge to the optimal joint action, even in difficult stochastic scenarios with high miscoordination penalties. © 2005 Springer-Verlag.
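To make the setting concrete, the sketch below shows a minimal version of the core idea described in the abstract: two independent agents in a repeated matrix game with high miscoordination penalties (the payoff matrix here is the well-known "climbing game" benchmark), where each agent commits to a single action for a fixed sequence of time steps and averages the rewards it observes over that sequence to estimate action values. This is an illustrative assumption-laden sketch, not the authors' actual protocol; the paper's full method coordinates the agents' commitment schedules so that reward estimates are not corrupted by the partner changing actions mid-sequence, which this simplified version does not guarantee.

```python
import random

# Climbing-game payoff matrix (deterministic variant), a standard
# coordination benchmark with severe miscoordination penalties.
# PAYOFF[a1][a2] is the shared reward for joint action (a1, a2);
# the optimal joint action is (0, 0) with reward 11.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]


def play(steps=3000, epsilon=0.1, commit_len=10, seed=0):
    """Two independent agents: neither observes the other's action.
    Each agent commits to one action for `commit_len` steps and uses
    the *average* reward over the commitment sequence as a sample for
    its own-action value estimate (simplified reward estimation)."""
    rng = random.Random(seed)
    est = [[0.0] * 3, [0.0] * 3]   # per-agent action-value estimates
    cnt = [[0] * 3, [0] * 3]       # per-agent sample counts
    t = 0
    while t < steps:
        # At the start of each commitment sequence, each agent picks
        # an action epsilon-greedily on its own estimates and sticks
        # to it for the whole sequence.
        acts = []
        for i in range(2):
            if rng.random() < epsilon:
                acts.append(rng.randrange(3))
            else:
                acts.append(max(range(3), key=lambda a: est[i][a]))
        total = 0.0
        for _ in range(commit_len):
            total += PAYOFF[acts[0]][acts[1]]
            t += 1
        avg = total / commit_len
        for i in range(2):
            cnt[i][acts[i]] += 1
            # Incremental mean update from the sequence-average reward.
            est[i][acts[i]] += (avg - est[i][acts[i]]) / cnt[i][acts[i]]
    # Greedy joint action after learning.
    return tuple(max(range(3), key=lambda a: est[i][a]) for i in range(2))
```

Because the agents here do not share a commitment schedule, this simplified learner can settle on a safe but suboptimal joint action; the point of the sketch is only to illustrate the independent-agent setting, the miscoordination penalties, and how averaging rewards over a commitment sequence filters stochastic noise out of each agent's estimates.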

Citation (APA)

Kapetanakis, S., Kudenko, D., & Strens, M. J. A. (2005). Learning to coordinate using commitment sequences in cooperative multi-agent systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3394 LNAI, pp. 106–118). https://doi.org/10.1007/978-3-540-32274-0_7
