Exploration entropy for reinforcement learning

Abstract

Analysing the training process of a Reinforcement Learning (RL) system and deciding when to terminate training have always been key issues in training an RL agent. This paper proposes a new approach based on State Entropy and Exploration Entropy to analyse the training process. State Entropy denotes the uncertainty of an RL agent's action selection at each state it traverses, while Exploration Entropy denotes the action-selection uncertainty of the whole system. The action-selection uncertainty of a given state, or of the whole system, reflects the agent's degree of exploration and its stage in the learning process. Exploration Entropy thus serves as a new criterion for analysing and managing the training process of RL. Theoretical analysis and experimental results illustrate that the Exploration Entropy curve contains more information than existing analytical methods.
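To make the two quantities concrete, here is a minimal sketch of how they could be computed for a tabular agent. It assumes a Boltzmann (softmax) action-selection policy over Q-values, so State Entropy is the Shannon entropy of that distribution at one state, and it aggregates over states with a plain mean; the paper's exact definitions (e.g. visitation-weighted aggregation) may differ, and all numbers are illustrative.

```python
import numpy as np

def state_entropy(q_row, temperature=1.0):
    """Shannon entropy of the softmax action-selection distribution
    at one state, given that state's row of Q-values (assumption:
    the agent selects actions with a Boltzmann/softmax policy)."""
    z = q_row / temperature
    z = z - z.max()                      # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax action probabilities
    p = np.clip(p, 1e-12, 1.0)           # avoid log(0)
    return -(p * np.log(p)).sum()

def exploration_entropy(q_table, temperature=1.0):
    """Aggregate action-selection uncertainty over all states
    (assumption: an unweighted mean of per-state entropies)."""
    return np.mean([state_entropy(row, temperature) for row in q_table])

# Early in training: near-uniform Q-values -> high entropy -> exploring.
q = np.random.default_rng(0).normal(0.0, 0.01, (4, 3))
print(exploration_entropy(q))          # close to log(3) ~= 1.099

# After learning: Q-values separate -> entropy drops toward 0.
q_trained = np.array([[5.0, 0.0, 0.0]] * 4)
print(exploration_entropy(q_trained))  # much lower, near 0
```

Under these assumptions, tracking the second value over training episodes yields the Exploration Entropy curve the abstract refers to: it starts near the maximum (uniform exploration) and decays as the policy becomes decisive.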

Citation (APA)

Xin, B., Yu, H., Qin, Y., Tang, Q., & Zhu, Z. (2020). Exploration entropy for reinforcement learning. Mathematical Problems in Engineering, 2020. https://doi.org/10.1155/2020/2672537
