Path Planning Method of Mobile Robot Using Improved Deep Reinforcement Learning


Abstract

A mobile robot path planning method based on improved deep reinforcement learning is proposed. First, to conform to the robot's actual kinematic model, a continuous environmental state space and a discrete action space are designed. An improved deep Q-network (DQN) method is then proposed that uses directly collected information as training samples and combines the robot's environmental state features with the target point to be reached as the network input. The network outputs the Q values of the available actions at the current position, and an ϵ-greedy strategy is used for action selection. Finally, a reward function combined with the artificial potential field method is designed to optimize the state-action space; this reward function alleviates the sparse-reward problem in the environmental state space and makes the robot's action selection more accurate. Experiments show that, compared with the classical DQN method, the average loss is reduced by 36.87% and the average reward is increased by 12.96%, effectively improving the working efficiency of the mobile robot.
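The two core ideas in the abstract, ϵ-greedy action selection over a discrete action set and a reward shaped by an artificial potential field (APF) to counter sparse rewards, can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: all function names, gain constants (`k_att`, `k_rep`), and the influence distance `d0` are assumptions.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon explore a random action;
    otherwise exploit the action with the highest Q value."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def apf_reward(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Dense APF-style reward: an attractive term pulling toward the
    goal, minus repulsive terms near obstacles within distance d0.
    This densifies the otherwise sparse reward signal."""
    reward = -k_att * np.linalg.norm(goal - pos)  # attractive: less negative closer to goal
    for obs in obstacles:
        d = np.linalg.norm(obs - pos)
        if d < d0:  # obstacle inside its influence radius
            reward -= k_rep * (1.0 / max(d, 1e-6) - 1.0 / d0)
    return reward

rng = np.random.default_rng(0)
action = epsilon_greedy(np.array([0.2, 0.9, 0.1]), epsilon=0.1, rng=rng)
r = apf_reward(np.zeros(2), np.array([3.0, 4.0]), [np.array([0.5, 0.0])])
```

In a training loop, `apf_reward` would replace (or augment) a goal-reached-only reward, giving the DQN a gradient of feedback at every step rather than only at the terminal state.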

Citation (APA)

Wang, W., Wu, Z., Luo, H., & Zhang, B. (2022). Path Planning Method of Mobile Robot Using Improved Deep Reinforcement Learning. Journal of Electrical and Computer Engineering, 2022. https://doi.org/10.1155/2022/5433988
