Distributed learning of best response behaviors in concurrent iterated many-object negotiations

Abstract

Iterated negotiations are a well-established method for coordinating distributed activities in multiagent systems. However, if several such negotiations take place concurrently, the participants' activities can mutually influence each other. In order to cope with the problem of interrelated interaction outcomes in partially observable environments, we apply distributed reinforcement learning to concurrent many-object negotiations. To this end, we discuss iterated negotiations from the perspective of repeated games, specify the agents' learning behavior, and introduce decentralized decision-making criteria for terminating a negotiation. Furthermore, we empirically evaluate the approach in a multiagent resource allocation scenario. The results show that our method enables the agents to successfully learn mutual best response behaviors which approximate Nash equilibrium allocations. Additionally, the learning reduces the interaction effort required to attain these results. © 2012 Springer-Verlag.
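The idea of independently learning mutual best responses under partial observability can be illustrated with a minimal, hypothetical sketch (this is not the paper's actual algorithm; all names and parameters below are illustrative assumptions). Two stateless Q-learning agents repeatedly contend for two resources; picking distinct resources succeeds for both, a conflict fails for both. Neither agent observes the other's values, yet repeated play drives them to a mutual best response, i.e. a pure Nash equilibrium of this anti-coordination game:

```python
import random

random.seed(0)

# Hypothetical sketch: two agents repeatedly negotiate over two
# resources. Distinct picks yield payoff 1 to each agent; a
# conflict yields 0. Each agent runs independent, stateless
# Q-learning over its own actions only (partial observability:
# it never sees the other agent's choices or values).

ACTIONS = [0, 1]            # resource indices (illustrative)
ALPHA, EPSILON = 0.1, 0.2   # learning rate, initial exploration rate

def reward(a, b):
    """Joint payoff: success only on a conflict-free allocation."""
    return (1.0, 1.0) if a != b else (0.0, 0.0)

def choose(q, eps):
    """Epsilon-greedy action selection over one agent's Q-values."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

q1 = {a: 0.0 for a in ACTIONS}
q2 = {a: 0.0 for a in ACTIONS}

for t in range(5000):
    eps = EPSILON * (1 - t / 5000)        # decay exploration over time
    a1, a2 = choose(q1, eps), choose(q2, eps)
    r1, r2 = reward(a1, a2)
    q1[a1] += ALPHA * (r1 - q1[a1])       # stateless Q-update
    q2[a2] += ALPHA * (r2 - q2[a2])

best1 = max(ACTIONS, key=lambda a: q1[a])
best2 = max(ACTIONS, key=lambda a: q2[a])
print(best1, best2)
```

Exploration breaks the initial symmetry: once one agent stumbles on an unclaimed resource, its positive reward reinforces that choice, the other agent's greedy choice becomes a best response to it, and the allocation stabilizes without any central coordinator.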

Citation (APA)

Berndt, J. O., & Herzog, O. (2012). Distributed learning of best response behaviors in concurrent iterated many-object negotiations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7598 LNAI, pp. 15–29). https://doi.org/10.1007/978-3-642-33690-4_4
