Anticipatory behavior of software agents in self-organizing negotiations


Abstract

Software agents are a well-established approach for modeling autonomous entities in distributed artificial intelligence. Iterated negotiations allow the activities of multiple autonomous agents to be coordinated by means of repeated interactions. However, if several agents interact concurrently, the participants' activities can mutually influence each other, which can lead to poor coordination results. In this paper, we discuss these interrelations and propose a self-organization approach to cope with this problem. To that end, we apply distributed reinforcement learning as a feedback mechanism in the agents' decision-making process. This enables the agents to use their experiences from previous activities to anticipate the results of potential future actions. They mutually adapt their behaviors to each other, which results in the emergence of social order within the multiagent system. We empirically evaluate the dynamics of this process in a multiagent resource allocation scenario. The results show that the agents successfully anticipate the reactions to their activities in this dynamic and partially observable negotiation environment. This enables them to maximize their payoffs and to drastically outperform non-anticipating agents.
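To make the feedback idea concrete, the following minimal sketch shows how an agent can learn anticipated payoffs for its bids from repeated negotiation outcomes. This is an illustration only, not the authors' implementation: the class name NegotiatingAgent, the discrete bid levels, the Q-learning-style update, and the toy single-item allocation are all assumptions introduced here.

```python
# Illustrative sketch (assumed, not from the paper): each agent keeps running
# payoff estimates for its possible bids and updates them after every negotiation
# round, so past experience "anticipates" the likely outcome of a future bid.
import random
from collections import defaultdict

class NegotiatingAgent:
    def __init__(self, bids, alpha=0.1, epsilon=0.1):
        self.bids = bids                   # discrete bid levels the agent may place
        self.alpha = alpha                 # learning rate for the payoff estimate
        self.epsilon = epsilon             # exploration probability
        self.value = defaultdict(float)    # anticipated payoff per bid

    def choose_bid(self):
        # Explore occasionally; otherwise pick the bid with the best anticipated payoff.
        if random.random() < self.epsilon:
            return random.choice(self.bids)
        return max(self.bids, key=lambda b: self.value[b])

    def observe(self, bid, payoff):
        # Feedback step: move the estimate for this bid toward the observed payoff.
        self.value[bid] += self.alpha * (payoff - self.value[bid])

def run_auction(agents, resource_value=10.0, rounds=1000):
    # Toy single-item allocation: the highest bidder wins and pays its bid, others get 0.
    for _ in range(rounds):
        offers = [(agent, agent.choose_bid()) for agent in agents]
        winner, _ = max(offers, key=lambda offer: offer[1])
        for agent, bid in offers:
            payoff = (resource_value - bid) if agent is winner else 0.0
            agent.observe(bid, payoff)

agents = [NegotiatingAgent(bids=[1, 2, 4, 6, 8]) for _ in range(3)]
run_auction(agents)
```

Because every agent learns from the outcomes produced by the others' concurrent bids, the estimates co-adapt over the repeated rounds; this is the feedback loop the abstract refers to, reduced to its simplest possible form.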

Citation (APA)

Berndt, J. O., & Herzog, O. (2015). Anticipatory behavior of software agents in self-organizing negotiations. In Anticipation Across Disciplines (Vol. 29, pp. 231–253). Springer International Publishing. https://doi.org/10.1007/978-3-319-22599-9_15
