On a Dynamical Analysis of Reinforcement Learning in Games: Emergence of Occam's Razor

Abstract

Modeling learning agents in the context of Multi-agent Systems requires an adequate understanding of their dynamic behaviour. Usually, these agents are modeled like the players in a standard game-theoretic model. Unfortunately, traditional Game Theory is static, which limits its usefulness. Evolutionary Game Theory improves on this by providing dynamics that describe how strategies evolve over time. In this paper, we discuss three learning models whose dynamics are related to the Replicator Dynamics (RD). We show how a classical Reinforcement Learning (RL) technique, namely Q-learning, relates to the RD. This relation makes it possible to better understand the learning process and to determine how complex an RL model should be. More precisely, Occam's Razor applies in the framework of games: the simplest model (Cross learning) suffices for learning equilibria. An experimental verification of all three models is presented.
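
To make the simplest model concrete: Cross learning keeps a probability vector over actions and, after receiving a reward r in [0, 1] for the action just played, shifts probability mass toward that action in proportion to r; its expected motion follows the Replicator Dynamics. Below is a minimal sketch, not taken from the paper: the game matrix (a Prisoner's Dilemma with payoffs rescaled to the unit interval, as Cross learning requires), the random seed, and the horizon are illustrative assumptions.

import numpy as np

# Illustrative symmetric 2x2 Prisoner's Dilemma, payoffs in [0, 1].
# Actions: 0 = cooperate, 1 = defect. A[i, j] is the row player's
# payoff when the row player plays i and the column player plays j.
A = np.array([[0.6, 0.0],
              [1.0, 0.2]])

rng = np.random.default_rng(0)
x = np.array([0.5, 0.5])  # row player's mixed strategy
y = np.array([0.5, 0.5])  # column player's mixed strategy

def cross_update(p, action, reward):
    # Cross learning update: p_i <- p_i + r * (1 - p_i) for the played
    # action i, and p_j <- p_j - r * p_j for every other action j.
    # Rewards must lie in [0, 1] so p remains a probability vector.
    p = p * (1.0 - reward)
    p[action] += reward
    return p

for t in range(20000):
    i = rng.choice(2, p=x)
    j = rng.choice(2, p=y)
    x = cross_update(x, i, A[i, j])  # row player's reward
    y = cross_update(y, j, A[j, i])  # symmetric game: column player's reward

print("row strategy:", x)
print("column strategy:", y)

Because defection strictly dominates in this game, the Replicator Dynamics flows toward the all-defect equilibrium, so both Cross learners should typically end up playing defect with probability close to one.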

Citation (APA):

Tuyls, K., Verbeeck, K., & Maes, S. (2003). On a dynamical analysis of reinforcement learning in games: Emergence of Occam's razor. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2691, pp. 335–344). Springer Verlag. https://doi.org/10.1007/3-540-45023-8_32
