Learning and innovative elements of strategy adoption rules expand cooperative network topologies

64 citations · 79 Mendeley readers

Abstract

Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation varies extensively with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying the reinforcement-learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that in the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) into the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing complex systems. © 2008 Wang et al.
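To make the abstract's central mechanism concrete, the following is a minimal sketch of a Q-learning strategy adoption rule with a low level of noise in an iterated Prisoner's Dilemma. All specifics here are illustrative assumptions, not the authors' actual setup: the payoff values (T=5, R=3, P=1, S=0), the learning rate, discount factor, epsilon noise level, and the choice of state (the opponent's previous move) are not taken from the paper, and this sketch pairs two agents directly rather than embedding them in a network.

```python
import random

# Standard Prisoner's Dilemma payoffs (assumed values, not the paper's):
# temptation, reward, punishment, sucker's payoff.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def payoff(my_move, other_move):
    """Row player's payoff; moves are 'C' (cooperate) or 'D' (defect)."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(my_move, other_move)]

class QLearner:
    """Q-learning agent whose state is the opponent's previous move."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {(s, a): 0.0 for s in ('C', 'D') for a in ('C', 'D')}
        self.state = 'C'  # assume the opponent cooperated initially

    def choose(self):
        # Epsilon noise: the occasional random, "innovative" move.
        if random.random() < self.epsilon:
            return random.choice(('C', 'D'))
        return max(('C', 'D'), key=lambda a: self.q[(self.state, a)])

    def update(self, action, reward, opponent_move):
        # Standard Q-learning update toward reward + discounted best
        # value of the next state (the opponent's new previous move).
        best_next = max(self.q[(opponent_move, a)] for a in ('C', 'D'))
        self.q[(self.state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(self.state, action)])
        self.state = opponent_move

# Two Q-learners playing each other; track the cooperation rate.
random.seed(0)
a, b = QLearner(), QLearner()
rounds, coop = 5000, 0
for _ in range(rounds):
    ma, mb = a.choose(), b.choose()
    a.update(ma, payoff(ma, mb), mb)
    b.update(mb, payoff(mb, ma), ma)
    coop += (ma == 'C') + (mb == 'C')
print(f"cooperation rate: {coop / (2 * rounds):.2f}")
```

In the paper's actual model systems each agent would instead play against its neighbors on a random, regular, small-world, scale-free or modular network; the sketch above only illustrates the learning rule and the role of the noise term.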

Citation (APA)
Wang, S., Szalay, M. S., Zhang, C., & Csermely, P. (2008). Learning and innovative elements of strategy adoption rules expand cooperative network topologies. PLoS ONE, 3(4). https://doi.org/10.1371/journal.pone.0001917
