Agents' cooperation based on long-term reciprocal altruism


Abstract

Cooperation among agents is critical in Artificial Intelligence (AI). In a multi-agent system (MAS), agents cooperate with each other for long-term return and, most of the time, build lasting partnerships. However, a partnership can easily break down if one agent fails or refuses to grant a favor to another. Would it benefit the MAS, or the individual agents, if each agent had a controllable level of tolerance? That is the main question of this paper. To answer it, we propose a cooperative strategy, the "flexible reciprocal altruism model" (FRAM). In FRAM, an agent has a controllable rate of tolerance and is willing to grant favors for long-term return. An agent decides whether to grant a favor to another based on their past interactions. As a result, an accidental unmatched favor does not immediately break the relationship between two agents. Experiments show that our strategy performs well under different cost/value tradeoffs, numbers of agents, and loads. © 2012 Springer-Verlag.
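To make the idea concrete, the Python sketch below implements a minimal tolerance-based favor-granting rule. It is only an illustration of the behaviour the abstract describes, not the paper's actual FRAM algorithm: the tolerance threshold and the unreturned-favor counter are assumptions introduced here for demonstration.

    from collections import defaultdict

    class TolerantAgent:
        """Toy agent that grants favors based on past interactions.

        Illustrative sketch only: the decision rule and parameter names
        are assumptions, not the FRAM model as defined in the paper.
        """

        def __init__(self, tolerance=2):
            # tolerance: how many unreturned favors this agent accepts
            # before refusing further cooperation with a partner (assumed knob)
            self.tolerance = tolerance
            # count of favors granted to each partner but not yet reciprocated
            self.unreturned = defaultdict(int)

        def should_grant(self, partner_id):
            # Grant a favor while the partner's unreturned-favor count stays
            # below the tolerance threshold; a single missed favor therefore
            # does not break the partnership immediately.
            return self.unreturned[partner_id] < self.tolerance

        def record_granted(self, partner_id):
            self.unreturned[partner_id] += 1

        def record_received(self, partner_id):
            self.unreturned[partner_id] = max(0, self.unreturned[partner_id] - 1)


    # Usage: partner "B" has not yet reciprocated any favor
    agent = TolerantAgent(tolerance=2)
    if agent.should_grant("B"):
        agent.record_granted("B")   # first favor granted
    if agent.should_grant("B"):
        agent.record_granted("B")   # second favor still granted
    print(agent.should_grant("B"))  # False: tolerance exhausted until B reciprocates

In this toy version, tolerance controls how forgiving an agent is: a higher value keeps the partnership alive through more unreturned favors, at the cost of being exploitable by non-reciprocating partners.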

Citation (APA)

Zhao, X., Xia, H., Yu, H., & Tian, L. (2012). Agents’ cooperation based on long-term reciprocal altruism. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7345 LNAI, pp. 689–698). https://doi.org/10.1007/978-3-642-31087-4_70
