Abstract
In this study, Q-learning is extended to multiagent systems in which a ranking in action selection is imposed among several self-interested agents. The learning process is regarded as a sequence of situations modeled as extensive form games with perfect information. Each agent decides on its actions, within the subgames the higher-level agents have already fixed, based on preferences influenced by the lower-level agents' preferences. These modified Q-values, called associative Q-values, estimate the utilities attainable over a subgame with respect to the lower-level agents' game preferences. A kind of social convention can thus be addressed in extensive form games, providing a better means of handling multiplicity of equilibrium points while reducing computational complexity relative to normal form games. The resulting process, called an extensive Markov game, is proved to be a kind of generalized Markov decision process. A comprehensive review of the related concepts and definitions previously developed for normal form games is also provided, along with analytical discussions of convergence and the computation space. A numerical example further elaborates the proposed method. © 2009 Asian Network for Scientific Information.
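The abstract describes Q-values that are conditioned on the actions already chosen by higher-ranked agents, so that each agent learns within the subgame those choices define. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the class name, the epsilon-greedy selection rule, and the `context` argument (the higher-ranked agents' committed actions) are all assumptions made for illustration.

```python
import random

class AssociativeQLearner:
    """Hypothetical sketch: Q-learning where values are keyed on the
    higher-ranked agents' committed actions (the 'context'), i.e. on
    the subgame in which this agent actually moves."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # (state, context, own_action) -> estimated subgame utility
        self.q = {}

    def value(self, state, context, action):
        return self.q.get((state, context, action), 0.0)

    def choose(self, state, context):
        # Epsilon-greedy over the subgame fixed by higher-ranked agents.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value(state, context, a))

    def update(self, state, context, action, reward, next_state, next_context):
        # Standard one-step Q-learning backup, but within the subgame
        # selected by the higher-ranked agents' next actions.
        best_next = max(self.value(next_state, next_context, a)
                        for a in self.actions)
        key = (state, context, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward + self.gamma * best_next - old)
```

In use, a higher-ranked agent would commit to an action first, and each lower-ranked agent would pass that commitment as `context` to `choose` and `update`, so its learned values are estimates of utility within that particular subgame.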
Akramizadeh, A., Afshar, A., & Menhaj, M. B. (2009). Multiagent reinforcement learning in extensive form games with perfect information. Journal of Applied Sciences, 9(11), 2056–2066. https://doi.org/10.3923/jas.2009.2056.2066