DM-DQN: Dueling Munchausen deep Q network for robot path planning

Abstract

To achieve collision-free path planning in complex environments, the Munchausen deep Q-network (M-DQN) is applied to a mobile robot to learn the best decision. Building on Soft-DQN, M-DQN adds the scaled log-policy to the immediate reward, which encourages the agent to explore more. However, M-DQN converges slowly. This paper proposes an improved algorithm, DM-DQN, to address that problem. First, the network structure of M-DQN is decomposed into a value function and an advantage function, decoupling action selection from action evaluation; this speeds up convergence, improves generalization, and lets the agent learn the best decision faster. Second, because the robot's trajectory tends to pass too close to obstacle edges, a reward function based on an artificial potential field is proposed to drive the trajectory away from the vicinity of obstacles. Simulation results show that the method learns more efficiently and converges faster than DQN, Dueling DQN, and M-DQN in both static and dynamic environments, and plans collision-free paths that keep clear of obstacles.
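For context, the "scaled log-policy added to the immediate reward" that the abstract describes corresponds, in the standard Munchausen-DQN formulation, to the regression target below. This is a sketch of the commonly published form, not necessarily the paper's exact notation: alpha scales the log-policy bonus, tau is the softmax temperature, and the policy is the softmax of the target network's Q-values.

```latex
% Standard Munchausen-DQN target (assumed notation), with
% \pi(\cdot \mid s) = \mathrm{softmax}\!\left(q_{\bar\theta}(s,\cdot)/\tau\right):
\hat{q}(s_t, a_t) = r_t + \alpha \tau \ln \pi(a_t \mid s_t)
  + \gamma \sum_{a'} \pi(a' \mid s_{t+1})
    \left[ q_{\bar\theta}(s_{t+1}, a') - \tau \ln \pi(a' \mid s_{t+1}) \right]
```

The alpha-scaled log-policy term is what distinguishes M-DQN from Soft-DQN, whose target contains only the entropy term inside the expectation.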
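The dueling decomposition that DM-DQN applies to the M-DQN network splits the Q-function into a state-value stream and an advantage stream. A minimal PyTorch sketch of such a dueling head, with a flat state vector and illustrative layer sizes assumed (not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor (size is an assumption for illustration).
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)        # shape (batch, 1)
        a = self.advantage(h)    # shape (batch, n_actions)
        # Subtracting the mean advantage makes the V/A split identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

Because the value stream is updated on every transition regardless of the action taken, this decomposition is the usual explanation for the faster convergence and better generalization the abstract reports.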
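The artificial-potential-field reward shaping can be illustrated with the classic repulsive potential. The influence radius d0, gain eta, and the exact potential form below are assumptions based on the standard APF formulation; the paper's specific reward function is not given in the abstract.

```python
import numpy as np

def repulsive_reward(robot_pos, obstacles, d0=1.0, eta=0.5):
    """Illustrative APF-style shaping term: obstacles within the influence
    radius d0 contribute a repulsive potential, penalizing trajectories
    that skirt obstacle edges."""
    penalty = 0.0
    for obs in obstacles:
        d = np.linalg.norm(np.asarray(robot_pos) - np.asarray(obs))
        if d < d0:
            # Classic repulsive potential: 0.5 * eta * (1/d - 1/d0)^2
            penalty += 0.5 * eta * (1.0 / max(d, 1e-6) - 1.0 / d0) ** 2
    return -penalty  # reward grows more negative as the robot nears an obstacle
```

Added to the environment's base reward, a term like this pushes the learned policy toward paths that keep a margin from obstacles, which is the effect the abstract describes.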

Citation (APA)

Gu, Y., Zhu, Z., Lv, J., Shi, L., Hou, Z., & Xu, S. (2023). DM-DQN: Dueling Munchausen deep Q network for robot path planning. Complex and Intelligent Systems, 9(4), 4287–4300. https://doi.org/10.1007/s40747-022-00948-7
