Combination of interaction models for multi-agents systems

Abstract

In this paper we present an interaction technique for coordinating agents that exchange rewards generated by Reinforcement Learning algorithms. Agents that coordinate by exchanging rewards need mechanisms to guide their interactions while they discover action policies. Because of the peculiarities of the environment and the objectives of each agent, no single coordination model is guaranteed to converge to an optimal policy. One possibility is to combine existing models so that a mechanism emerges that is less sensitive to the system's variables. The technique described here builds on three previously studied models, in which agents (i) share learning in a predefined cycle of interactions, (ii) cooperate at every interaction, and (iii) cooperate when an agent reaches the goal state. Traffic scenarios were generated to validate the proposed technique. The results showed that, even though the computational complexity increased, the gains in convergence make the technique superior to classical Reinforcement Learning approaches.
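
The abstract names the three interaction models but not their update or exchange rules, so the following is only an illustrative sketch of how they might combine: independent Q-learning agents on a toy corridor, with a hypothetical share() rule that propagates the best Q-value any agent has learned for each state-action pair. The names env_step, share, and train, the corridor environment, and the max-based exchange rule are all assumptions for illustration, not the authors' implementation.

```python
"""Sketch: three reward-sharing triggers for cooperating Q-learning
agents, combined in one training loop (illustrative assumptions only)."""
import random

N_STATES, GOAL = 8, 7            # toy corridor: cells 0..7, goal at the right end
ACTIONS = (-1, +1)               # step left, step right

def env_step(state, action_idx):
    """Deterministic corridor dynamics; reward 1 only on reaching the goal."""
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

class QAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if random.random() < self.eps:        # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        best = max(self.q[s])                 # break greedy ties at random
        return random.choice([a for a, v in enumerate(self.q[s]) if v == best])

    def update(self, s, a, r, s2):
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def share(agents):
    """One plausible 'exchange of rewards': every agent keeps the best
    Q-value any agent has learned for each state-action pair."""
    for s in range(N_STATES):
        for a in range(len(ACTIONS)):
            best = max(ag.q[s][a] for ag in agents)
            for ag in agents:
                ag.q[s][a] = best

def train(agents, episodes=100, cycle=10,
          share_each_step=True, share_on_goal=True):
    for ep in range(1, episodes + 1):
        for ag in agents:
            s = 0
            while s != GOAL:
                a = ag.act(s)
                s2, r = env_step(s, a)
                ag.update(s, a, r, s2)
                if share_each_step:   # model (ii): cooperate at every interaction
                    share(agents)
                s = s2
            if share_on_goal:         # model (iii): cooperate on reaching the goal
                share(agents)
        if ep % cycle == 0:           # model (i): predefined cycle of interactions
            share(agents)

if __name__ == "__main__":
    team = [QAgent() for _ in range(3)]
    train(team)
    print([round(v, 2) for v in team[0].q[0]])  # learned values at the start state
```

The two flags and the cycle parameter map one-to-one onto the three models, so disabling flags recovers each base model in isolation; how the paper actually weights or switches between the models when combining them is not specified in the abstract.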

Citation (APA)

Ribeiro, R., Guisi, D. M., Teixeira, M., Dosciatti, E. R., Borges, A. P., & Enembreck, F. (2017). Combination of interaction models for multi-agents systems. In Lecture Notes in Business Information Processing (Vol. 291, pp. 107–121). Springer Verlag. https://doi.org/10.1007/978-3-319-62386-3_5
