Deep Q-learning with prioritized sampling

Abstract

The combination of modern reinforcement learning and deep learning brings significant breakthroughs to domains that require both rich perception of high-dimensional sensory inputs and policy selection. A recent advance in using deep neural networks as function approximators, termed Deep Q-Networks (DQN), has proved very powerful for problems approaching real-world complexity, such as Atari 2600 games. To remove temporal correlation between observed transitions, DQN uses a sampling mechanism called experience replay, which simply replays transitions drawn uniformly at random from a memory buffer. However, this mechanism ignores the relative importance of the transitions stored in the buffer. In this paper, we incorporate prioritized sampling into DQN as an alternative. Our experimental results demonstrate that DQN with prioritized sampling achieves better performance, in terms of both average score and learning speed, on four Atari 2600 games.
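The key difference from standard DQN is only in how minibatches are drawn from the replay memory: instead of uniform sampling, each transition is sampled with probability proportional to a priority derived from its TD error. The following Python sketch illustrates that idea under common assumptions (absolute-TD-error priorities, an alpha exponent shaping the distribution); the class and parameter names are illustrative and not taken from the paper's implementation.

```python
import numpy as np


class PrioritizedReplayBuffer:
    """Replay buffer that samples transitions in proportion to a
    TD-error-based priority instead of uniformly at random."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities shape the sampling distribution
        self.eps = eps          # small constant so no transition has zero probability
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority^alpha.
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        batch = [self.buffer[i] for i in indices]
        return batch, indices

    def update_priorities(self, indices, td_errors):
        # Refresh priorities of replayed transitions from their new TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In a DQN training loop, `sample()` would replace the uniform draw from the replay memory, and after the TD errors for the minibatch are computed, `update_priorities()` refreshes the priorities of the sampled transitions so that informative transitions are revisited more often.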

Citation (APA)

Zhai, J., Liu, Q., Zhang, Z., Zhong, S., Zhu, H., Zhang, P., & Sun, C. (2016). Deep Q-learning with prioritized sampling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9947 LNCS, pp. 13–22). Springer Verlag. https://doi.org/10.1007/978-3-319-46687-3_2
