Cooperation between multiple agents based on partially sharing policy


Abstract

In human society, learning is essential to intelligent behavior. However, people do not need to learn everything from scratch through their own discovery. Instead, they exchange information and knowledge with one another and learn from their peers and teachers. When a task is too complex for an individual to handle, that individual may cooperate with partners in order to accomplish it. As in human society, cooperation exists in other species, such as ants, which are known to communicate the locations of food and move it cooperatively. By using the experience and knowledge of other agents, a learning agent may learn faster, make fewer mistakes, and create rules for unstructured situations. In the proposed learning algorithm, an agent adapts to comply with its peers by learning carefully when it obtains a positive reinforcement feedback signal, but learning more aggressively when a negative reward follows the action just taken. These two properties underpin the proposed cooperative learning method. The algorithm is implemented in several cooperative tasks and demonstrates that agents can learn to accomplish a task together efficiently through repetitive trials. © Springer-Verlag Berlin Heidelberg 2007.
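The reward-dependent learning rule described above can be sketched as a tabular Q-learner whose learning rate depends on the sign of the reward. This is a minimal illustration, not the paper's implementation: the class name, the two learning-rate parameters `alpha_pos` and `alpha_neg`, and the choice of a standard Q-learning update are all assumptions made for the example, since the abstract gives only the careful/aggressive principle.

```python
from collections import defaultdict

class CautiousAggressiveAgent:
    """Q-learner whose step size depends on the reward's sign:
    small steps after positive feedback (learn carefully),
    large steps after negative feedback (learn aggressively).
    Hypothetical sketch of the principle stated in the abstract."""

    def __init__(self, actions, alpha_pos=0.1, alpha_neg=0.5, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha_pos = alpha_pos    # cautious rate for positive rewards (assumed value)
        self.alpha_neg = alpha_neg    # aggressive rate for negative rewards (assumed value)
        self.gamma = gamma            # discount factor

    def update(self, s, a, reward, s_next):
        # Pick the learning rate from the reward's sign.
        alpha = self.alpha_pos if reward >= 0 else self.alpha_neg
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += alpha * td_error
```

A negative reward here moves the value estimate five times faster than a positive one of the same magnitude, so mistakes are corrected quickly while successful behavior is refined gradually.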

Citation (APA)

Hwang, K. S., Lin, C. J., Wu, C. J., & Lo, C. Y. (2007). Cooperation between multiple agents based on partially sharing policy. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4681 LNCS, pp. 422–432). Springer Verlag. https://doi.org/10.1007/978-3-540-74171-8_42
