In this paper, we study the emergence of game strategy in multiagent systems, comparing symbolic and subsymbolic approaches. The symbolic approach is represented by a backtracking algorithm with a specified search depth, whereas the subsymbolic approach is represented by feed-forward neural networks adapted by the reinforcement-learning temporal-difference technique TD(λ). We study both standard feed-forward networks and mixture-of-adaptive-experts networks. As a test game, we used a simplified version of checkers. It is demonstrated that both network types are capable of game-strategy emergence. © Springer-Verlag Berlin Heidelberg 2008.
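The abstract names TD(λ) as the adaptation technique. As a rough illustration of that method (not the authors' implementation), the following sketch shows one TD(λ) update step with eligibility traces under linear value-function approximation; the function name and all parameter values are assumptions chosen for the example:

```python
import numpy as np

def td_lambda_update(w, e, x, x_next, reward,
                     alpha=0.01, gamma=0.99, lam=0.7):
    """One TD(lambda) step for a linear value function V(s) = w . x(s).

    w      -- weight vector of the value function
    e      -- eligibility-trace vector (same shape as w)
    x      -- feature vector of the current state
    x_next -- feature vector of the successor state
    """
    # TD error: how much the observed transition disagrees with V
    delta = reward + gamma * np.dot(w, x_next) - np.dot(w, x)
    # Decay the trace and accumulate the current state's features
    e = gamma * lam * e + x
    # Move the weights along the trace, proportionally to the TD error
    w = w + alpha * delta * e
    return w, e
```

In a game setting such as checkers, `x` would encode the board position and `reward` would typically be zero until a terminal win/loss signal; the trace lets that terminal signal propagate back through earlier positions.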
CITATION STYLE
Lacko, P., & Kvasnička, V. (2008). Mixture of expert used to learn game play. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 225–234). https://doi.org/10.1007/978-3-540-87536-9_24