The evolution of inefficiency in a simulated stag hunt

Abstract

We used genetic algorithms to evolve populations of reinforcement learning (Q-learning) agents that played a repeated two-player symmetric coordination game under different risk conditions. Evolution steered the simulated populations to the Pareto-inefficient equilibrium under high-risk conditions and to the Pareto-efficient equilibrium under low-risk conditions. Greater degrees of forgiveness and temporal discounting of future returns emerged in populations playing the low-risk game. The results demonstrate the utility of simulation for evolutionary psychology.
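To make the setup concrete, below is a minimal Python sketch of the within-generation interaction the abstract describes: two tabular Q-learners repeatedly playing a stag hunt. The payoff values, the choice to key each learner's state on its opponent's previous move, and the alpha/gamma/epsilon parameters are illustrative assumptions for exposition, not values taken from the article.

import random

# Illustrative stag-hunt payoffs (not the paper's values): mutual Stag is
# Pareto efficient, mutual Hare is the safer, Pareto-inefficient outcome.
STAG, HARE = 0, 1
PAYOFF = {
    (STAG, STAG): (5, 5),
    (STAG, HARE): (0, 3),
    (HARE, STAG): (3, 0),
    (HARE, HARE): (3, 3),
}

class QAgent:
    """Tabular Q-learner whose state is the opponent's previous move."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q[state][action]; states: None (first round), STAG, HARE
        self.Q = {s: [0.0, 0.0] for s in (None, STAG, HARE)}

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice((STAG, HARE))
        q = self.Q[state]
        return STAG if q[STAG] >= q[HARE] else HARE

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.Q[next_state])
        td_target = reward + self.gamma * best_next
        self.Q[state][action] += self.alpha * (td_target - self.Q[state][action])

def play(agent_a, agent_b, rounds=1000):
    """Repeated stag hunt between two Q-learners; returns total payoffs."""
    state_a = state_b = None          # no history on the first round
    total_a = total_b = 0
    for _ in range(rounds):
        a = agent_a.act(state_a)
        b = agent_b.act(state_b)
        r_a, r_b = PAYOFF[(a, b)]
        agent_a.update(state_a, a, r_a, b)   # next state = opponent's move
        agent_b.update(state_b, b, r_b, a)
        state_a, state_b = b, a
        total_a += r_a
        total_b += r_b
    return total_a, total_b

if __name__ == "__main__":
    random.seed(0)
    print(play(QAgent(), QAgent()))

In the article, a genetic algorithm evolves populations of such learners across generations, with traits such as the degree of temporal discounting subject to selection; the sketch above fixes those parameters by hand only to illustrate the within-generation game dynamics.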

Citation (APA)

Bearden, J. N. (2001). The evolution of inefficiency in a simulated stag hunt. Behavior Research Methods, Instruments, & Computers, 33(2), 124–129. https://doi.org/10.3758/BF03195357
