Exploration bonuses based on upper confidence bounds for sparse reward games

Abstract

Recent deep reinforcement learning (RL) algorithms have achieved superhuman performance in many Atari games. However, a closer look at their performance reveals that the algorithms fall short of humans in games where rewards are only obtained occasionally. One solution to this sparse reward problem is to incorporate an explicit and more sophisticated exploration strategy in the agent’s learning process. In this paper, we present an effective exploration strategy that explicitly considers the progress of training using exploration bonuses based on Upper Confidence Bounds (UCB). Our method also includes a mechanism to separate exploration bonuses from rewards, thereby avoiding the problem of interfering with the original learning objective. We evaluate our method on Atari 2600 games with sparse rewards, and achieve significant improvements over the vanilla asynchronous advantage actor-critic (A3C) algorithm.
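To make the idea concrete, below is a minimal sketch of a generic count-based UCB exploration bonus of the form c * sqrt(ln T / N(s)). The state hashing, the scale constant c, and the exact bonus expression are illustrative assumptions; they are not necessarily the formulation used in the paper, which also describes a mechanism for keeping the bonus separate from the game reward.

```python
import math
from collections import defaultdict


class UCBBonus:
    """Generic count-based UCB exploration bonus (illustrative sketch only).

    Assumptions: states are reduced to a hashable key, `c` is a tunable
    scale, and the bonus is c * sqrt(ln(T + 1) / N(s)) where T is the
    total number of visits so far and N(s) the visit count of state s.
    """

    def __init__(self, c: float = 0.1):
        self.c = c                      # bonus scale (assumed hyperparameter)
        self.counts = defaultdict(int)  # visit counts per hashed state
        self.total = 0                  # total visits across all states

    def __call__(self, state_key) -> float:
        # Update counts, then return a bonus that shrinks as the state
        # is revisited and as training progresses.
        self.total += 1
        self.counts[state_key] += 1
        n = self.counts[state_key]
        return self.c * math.sqrt(math.log(self.total + 1.0) / n)


if __name__ == "__main__":
    # Usage sketch: the bonus would be handled separately from the
    # environment reward (e.g. via a separate value/advantage term)
    # rather than simply added to the game score.
    bonus_fn = UCBBonus(c=0.1)
    for step in range(5):
        state_key = ("room_1", step % 2)  # hypothetical discretized state
        print(step, round(bonus_fn(state_key), 4))
```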

Citation (APA)

Mizukami, N., Suzuki, J., Kameko, H., & Tsuruoka, Y. (2017). Exploration bonuses based on upper confidence bounds for sparse reward games. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10664 LNCS, pp. 165–175). Springer Verlag. https://doi.org/10.1007/978-3-319-71649-7_14
