In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigating this issue constrain the exploration range using expert data or pretrained models. Nevertheless, these methods do not effectively reduce the initial exploration range, because the agent's exploration remains limited to states adjacent to those included in the expert data. This paper proposes a method to reduce the initial exploration range in reinforcement learning using a transformer decoder pretrained on expert data. In the proposed method, a transformer decoder pretrained on large amounts of expert data guides the agent's actions during the early learning stages; after a certain learning threshold is reached, actions are instead determined with the epsilon-greedy strategy. An experiment was conducted in the basketball game FreeStyle1 to compare the proposed method with a conventional Deep Q-Network (DQN) using the epsilon-greedy strategy. The results indicate that the proposed method achieved approximately 2.5 times the average reward and a 26% higher win rate, demonstrating its improved performance in reducing the exploration range and shortening learning time. This method presents a significant improvement over traditional exploration techniques in reinforcement learning.
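For concreteness, the following is a minimal Python sketch of the hybrid action-selection rule the abstract describes: follow the pretrained decoder during an early warm-up phase, then switch to epsilon-greedy over the DQN's Q-values. The names (`decoder_policy`, `q_values`), the warm-up threshold, and the epsilon value are illustrative assumptions; the abstract does not specify the actual interface or hyperparameters.

```python
# Sketch of the hybrid action selection described in the abstract.
# decoder_policy, q_values, WARMUP_STEPS, and EPSILON are illustrative
# placeholders, not the authors' actual implementation.
import random
import numpy as np

N_ACTIONS = 6          # assumed size of the game's discrete action space
WARMUP_STEPS = 10_000  # assumed "learning threshold" before switching strategies
EPSILON = 0.1          # exploration rate used after the warm-up phase


def decoder_policy(state_history):
    """Placeholder for the transformer decoder pretrained on expert data.
    It would map the recent state/action history to an expert-like action."""
    # Stand-in: derive an arbitrary but deterministic action from the history.
    return hash(tuple(state_history)) % N_ACTIONS


def q_values(state):
    """Placeholder for the DQN's Q-value estimates of the current state."""
    return np.random.randn(N_ACTIONS)  # stand-in values


def select_action(state, state_history, step):
    if step < WARMUP_STEPS:
        # Early learning: follow the pretrained decoder to narrow exploration.
        return decoder_policy(state_history)
    # After the threshold: standard epsilon-greedy over the DQN's Q-values.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_values(state)))
```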
CITATION STYLE
Kyoung, D., & Sung, Y. (2023). Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning. Sensors, 23(17). https://doi.org/10.3390/s23177411