What’s in a Game? The Effect of Game Complexity on Deep Reinforcement Learning


Abstract

Deep Reinforcement Learning (DRL) combines deep neural networks with reinforcement learning. Unlike their predecessors, these methods learn end-to-end, extracting high-dimensional representations from raw sensory data to predict actions directly. DRL methods have been shown to master most of the Atari games, beating humans in many of them, using the same algorithm, network architecture, and hyper-parameters. However, why DRL works better on some games than others has not been fully investigated. In this paper, we propose that the complexity of each game is defined by a number of factors (the size of the search space, the existence or absence of enemies, the existence or absence of intermediate rewards, and so on), and we posit that how fast and how well a game is learned by DRL depends on these factors. Towards this aim, we use simplified Maze and Pacman environments and conduct experiments to see the effect of such factors on the convergence of DRL. Our results provide a first step towards a better understanding of how DRL works, and as such will be informative in determining scenarios where DRL can be applied effectively, e.g., outside of games.
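To illustrate the kind of experiment the abstract describes, the following is a hypothetical minimal sketch: tabular Q-learning on a toy maze with a sparse terminal reward, one of the complexity factors the paper discusses. It is a simplified stand-in for the deep RL agents used in the paper, not the authors' actual setup; the grid size, reward scheme, and hyper-parameters are illustrative assumptions.

```python
# Toy maze with sparse reward, learned by tabular Q-learning.
# All environment and hyper-parameter choices here are illustrative
# assumptions, not the paper's experimental configuration.
import random

random.seed(0)

SIZE = 4                      # 4x4 grid; start at (0, 0), goal at (3, 3)
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < SIZE and 0 <= nc < SIZE):
        nr, nc = r, c         # bumping into a wall leaves the agent in place
    new_state = (nr, nc)
    # Sparse reward: +1 only at the goal, 0 everywhere else
    # (absence of intermediate reward slows learning).
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward, new_state == GOAL

Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(4)}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    state = (0, 0)
    for _ in range(50):       # cap episode length
        if random.random() < eps:
            action = random.randrange(4)          # explore
        else:
            action = max(range(4), key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(4))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
        if done:
            break

# Greedy rollout with the learned policy.
state, steps = (0, 0), 0
while state != GOAL and steps < 20:
    action = max(range(4), key=lambda a: Q[(state, a)])
    state, _, _ = step(state, action)
    steps += 1
print(state == GOAL, steps)
```

Varying the grid size (search space), adding patrolling enemies, or shaping the reward with intermediate payoffs would each change how many episodes the agent needs to converge, which is the kind of effect the paper studies with DRL agents instead of a lookup table.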

Citation (APA)

Emekligil, E., & Alpaydın, E. (2019). What’s in a Game? The Effect of Game Complexity on Deep Reinforcement Learning. In Communications in Computer and Information Science (Vol. 1017, pp. 147–163). Springer Verlag. https://doi.org/10.1007/978-3-030-24337-1_8
