Deep Reinforcement Learning in Strategic Board Game Environments

16 citations · 27 Mendeley readers

Abstract

In this paper we propose a novel Deep Reinforcement Learning (DRL) algorithm that uses the concept of “action-dependent state features” and exploits it to approximate Q-values locally, employing a deep neural network with parallel Long Short-Term Memory (LSTM) components, each responsible for computing the Q-value of one action. As such, all computations occur simultaneously, and there is no need to employ “target” networks or experience replay, techniques regularly used in the DRL literature. Moreover, our algorithm does not require prior training experience, but trains itself online during game play. We tested our approach in the multi-player strategic board game Settlers of Catan. Our results confirm the effectiveness of our approach: it outperforms several competitors, including the state-of-the-art jSettler heuristic algorithm devised for this particular domain.
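The architecture described in the abstract pairs each action with its own recurrent component that maps action-dependent state features to a scalar Q-value. The sketch below illustrates that idea only; the layer sizes, initialization, and class names are assumptions for illustration, not the authors' implementation, and the training procedure is omitted entirely.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ActionLSTMHead:
    """One small LSTM cell plus a linear readout, responsible for the
    Q-value of a single action (illustrative; all sizes are assumptions)."""
    def __init__(self, input_dim, hidden_dim, rng):
        z = input_dim + hidden_dim
        # Gate weights for input, forget, output, and candidate gates, stacked.
        self.W = rng.standard_normal((4 * hidden_dim, z)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.w_out = rng.standard_normal(hidden_dim) * 0.1
        self.h = np.zeros(hidden_dim)   # hidden state
        self.c = np.zeros(hidden_dim)   # cell state
        self.hidden_dim = hidden_dim

    def step(self, x):
        # Standard LSTM cell update, then a linear readout to a scalar Q-value.
        z = np.concatenate([x, self.h])
        gates = self.W @ z + self.b
        H = self.hidden_dim
        i = sigmoid(gates[0:H])          # input gate
        f = sigmoid(gates[H:2 * H])      # forget gate
        o = sigmoid(gates[2 * H:3 * H])  # output gate
        g = np.tanh(gates[3 * H:4 * H])  # candidate cell state
        self.c = f * self.c + i * g
        self.h = o * np.tanh(self.c)
        return float(self.w_out @ self.h)

class ParallelQNetwork:
    """One LSTM head per action; each head reads the action-dependent
    state features for its action and emits that action's Q-value."""
    def __init__(self, n_actions, input_dim, hidden_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.heads = [ActionLSTMHead(input_dim, hidden_dim, rng)
                      for _ in range(n_actions)]

    def q_values(self, features):
        # features[a] = feature vector for action a at this time step.
        return np.array([head.step(features[a])
                         for a, head in enumerate(self.heads)])

net = ParallelQNetwork(n_actions=3, input_dim=5)
feats = [np.ones(5) * a for a in range(3)]
q = net.q_values(feats)
print(q.shape)  # (3,)
```

Because each head maintains its own recurrent state and depends only on its own action's features, the heads can be evaluated independently, which is what makes the "all computations occur simultaneously" claim possible in this design.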

Citation (APA)

Xenou, K., Chalkiadakis, G., & Afantenos, S. (2019). Deep Reinforcement Learning in Strategic Board Game Environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11450 LNAI, pp. 233–248). Springer Verlag. https://doi.org/10.1007/978-3-030-14174-5_16
