Generative exploration and exploitation

Abstract

Sparse reward is one of the biggest challenges in reinforcement learning (RL). In this paper, we propose a novel method called Generative Exploration and Exploitation (GENE) to overcome sparse reward. GENE automatically generates start states that encourage the agent both to explore the environment and to exploit received reward signals. GENE adaptively trades off between exploration and exploitation according to the varying distributions of states experienced by the agent as learning progresses. It requires no prior knowledge about the environment and can be combined with any RL algorithm, whether on-policy or off-policy, single-agent or multi-agent. Empirically, we demonstrate that GENE significantly outperforms existing methods in three tasks with only binary rewards, including Maze, Maze Ant, and Cooperative Navigation. Ablation studies verify the emergence of progressive exploration and automatic reversing.
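Going by the abstract alone, the core loop is: fit a generative model to the states the agent has visited, then sample episode start states from it, favoring rarely visited states for exploration and familiar states for exploitation. The sketch below illustrates that idea only; it is not the authors' implementation. The kernel-density stand-in for the learned generator and the names KDEStateGenerator, sample_start_state, and explore_prob are all hypothetical.

```python
# Minimal, hypothetical sketch of GENE-style start-state generation,
# based only on the abstract. A Gaussian kernel density estimate stands
# in for the paper's learned generative model to keep it self-contained.
import numpy as np

class KDEStateGenerator:
    """Toy density model over visited states (not the authors' code)."""

    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth
        self.states = None  # visited states, shape (N, state_dim)

    def fit(self, visited_states):
        # Refit on the buffer of visited states each training iteration,
        # so the model tracks the agent's changing state distribution.
        self.states = np.asarray(visited_states, dtype=float)

    def density(self, s):
        # Average Gaussian kernel value of s against all visited states.
        d = np.linalg.norm(self.states - s, axis=1)
        return np.mean(np.exp(-0.5 * (d / self.bandwidth) ** 2))

    def sample_start_state(self, explore_prob=0.5, n_candidates=64):
        """Propose a start state for the next episode.

        With probability `explore_prob`, return the candidate the model
        finds least likely (a novel region -> exploration); otherwise
        return the most likely one (a familiar region -> exploitation).
        """
        idx = np.random.choice(len(self.states), size=n_candidates)
        candidates = self.states[idx] + np.random.normal(
            scale=self.bandwidth, size=self.states[idx].shape)
        scores = np.array([self.density(c) for c in candidates])
        if np.random.rand() < explore_prob:
            return candidates[np.argmin(scores)]  # novel state
        return candidates[np.argmax(scores)]      # familiar state

# Usage: reset the environment to the sampled state before each episode.
gen = KDEStateGenerator()
gen.fit(np.random.rand(200, 2))   # placeholder visited 2-D states
start = gen.sample_start_state()  # state to reset the env to
```

In this reading, the explore/exploit balance would shift automatically as the fitted state distribution changes over training; how GENE actually adapts that trade-off is specified in the paper, not here.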

Cite

APA

Jiang, J., & Lu, Z. (2020). Generative exploration and exploitation. In AAAI 2020 – 34th AAAI Conference on Artificial Intelligence (pp. 4337–4344). AAAI Press. https://doi.org/10.1609/aaai.v34i04.5858
