Wide and Deep Reinforcement Learning Extended for Grid-Based Action Games

Abstract

Over the last decade, Deep Reinforcement Learning (DRL) has developed very rapidly, but comparatively little work has been done to integrate linear methods into it. Our research aims at a simple and practical Wide and Deep Reinforcement Learning framework that extends DRL algorithms by combining linear (wide) and non-linear (deep) methods. This framework can help to integrate expert knowledge or to fuse sensor information while at the same time improving the performance of existing DRL algorithms. To test the framework, we developed an extension of the popular Deep Q-Networks algorithm, which we call Wide Deep Q-Networks. We analyze its performance against Deep Q-Networks, linear agents, and human players on Berkeley's Pac-Man environment. Our algorithm considerably outperforms Deep Q-Networks in both learning speed and final performance, showing its potential for boosting existing algorithms. Furthermore, it is robust to the failure of one of its components.
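To make the "wide and deep" idea concrete, the following is a minimal sketch of what such a Q-network could look like: a linear (wide) head over hand-crafted expert features combined additively with a non-linear (deep) head over the raw grid observation. The abstract does not specify the authors' exact architecture, so the class name `WideDeepQNetwork`, the layer sizes, and the additive combination are all illustrative assumptions.

```python
# Hedged sketch of a wide-and-deep Q-network (NOT the paper's exact model):
# a linear branch over expert features plus a deep conv branch over the grid.
import torch
import torch.nn as nn

class WideDeepQNetwork(nn.Module):
    def __init__(self, grid_channels: int, n_wide_features: int, n_actions: int):
        super().__init__()
        # Deep (non-linear) branch: small conv net over the grid observation.
        self.deep = nn.Sequential(
            nn.Conv2d(grid_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # works for any grid size
            nn.Flatten(),
            nn.Linear(32, n_actions),
        )
        # Wide (linear) branch: single linear map over hand-crafted features,
        # e.g. distances to ghosts or food pellets in Pac-Man (assumption).
        self.wide = nn.Linear(n_wide_features, n_actions)

    def forward(self, grid: torch.Tensor, wide_features: torch.Tensor) -> torch.Tensor:
        # Q-values from both branches are summed, so either component can
        # still drive behavior if the other degrades or fails.
        return self.deep(grid) + self.wide(wide_features)

# Usage: batch of 4 states, 6-channel 11x20 grid, 10 expert features, 5 actions.
net = WideDeepQNetwork(grid_channels=6, n_wide_features=10, n_actions=5)
q = net(torch.randn(4, 6, 11, 20), torch.randn(4, 10))
assert q.shape == (4, 5)
```

The additive combination is one plausible reading of the robustness claim in the abstract: if one branch produces uninformative values, the other branch's Q-values still shape the policy.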

Citation (APA)

Montoya, J. M., Doell, C., & Borgelt, C. (2019). Wide and Deep Reinforcement Learning Extended for Grid-Based Action Games. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11978 LNAI, pp. 224–245). Springer. https://doi.org/10.1007/978-3-030-37494-5_12
