Mixture of expert used to learn game play


Abstract

In this paper, we study the emergence of game strategy in multiagent systems. Symbolic and subsymbolic approaches are compared. The symbolic approach is represented by a backtracking algorithm with a specified search depth, whereas the subsymbolic approach is represented by feed-forward neural networks adapted by the reinforcement-learning temporal-difference TD(λ) technique. We study both standard feed-forward networks and mixture-of-adaptive-experts networks. As a test game, we use simplified checkers. It is demonstrated that both network types are capable of game-strategy emergence. © Springer-Verlag Berlin Heidelberg 2008.
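The TD(λ) technique mentioned in the abstract updates a value-function approximator from the difference between successive position evaluations, with an eligibility trace spreading credit over earlier moves. The paper applies it to feed-forward and mixture-of-experts networks; as a minimal illustration of the update rule itself (not the authors' implementation), here is a sketch for a linear value function, where the feature vectors, step size, and toy two-state episode are all hypothetical:

```python
import numpy as np

def td_lambda_update(w, e, x, x_next, reward, alpha=0.1, gamma=0.9, lam=0.7):
    """One TD(lambda) step for a linear value function V(s) = w . x(s).

    w: weight vector, e: eligibility trace,
    x / x_next: feature vectors of the current and next state.
    Returns the updated (w, e) pair.
    """
    # TD error: observed reward plus discounted next-state value, minus current value.
    delta = reward + gamma * np.dot(w, x_next) - np.dot(w, x)
    e = gamma * lam * e + x      # decay the trace and mark the current features
    w = w + alpha * delta * e    # credit all recently visited features
    return w, e

# Toy episode: two one-hot states, terminal reward 1 after the second move.
w = np.zeros(2)
e = np.zeros(2)
s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
terminal = np.zeros(2)

w, e = td_lambda_update(w, e, s0, s1, reward=0.0)
w, e = td_lambda_update(w, e, s1, terminal, reward=1.0)
# The final reward propagates back to s0 through the eligibility trace:
# w[1] = 0.1 (direct credit), w[0] = 0.063 (traced credit, gamma * lam * alpha).
```

In a game setting such as checkers, the terminal reward would encode win/loss, and a neural network's gradient would replace the feature vector in the trace update.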

APA

Lacko, P., & Kvasnička, V. (2008). Mixture of expert used to learn game play. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 225–234). https://doi.org/10.1007/978-3-540-87536-9_24
