Predator-Prey Reward Based Q-Learning Coverage Path Planning for Mobile Robot


Abstract

Coverage Path Planning (CPP for short) is a fundamental problem for mobile robots across a variety of applications. $Q$-Learning based CPP algorithms have only recently begun to be explored. To overcome traditional $Q$-Learning's tendency to fall into local optima, this paper introduces new reward functions derived from the Predator-Prey model into a traditional $Q$-Learning based CPP solution: a comprehensive reward function that combines three components, namely a Predation Avoidance Reward Function, a Smoothness Reward Function, and a Boundary Reward Function. In addition, the influence of the weighting parameters on the total reward function is discussed. Extensive simulation results and practical experiments verify that the proposed Predator-Prey reward based $Q$-Learning CPP (PP-$Q$-Learning based CPP for short) outperforms traditional BCD and $Q$-Learning based CPP in terms of repetition ratio and number of turns.
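For intuition, here is a minimal sketch of how a composite reward of this shape could be wired into a grid-world $Q$-Learning CPP agent. The component forms, the weights, and all function names below are illustrative assumptions; the paper defines its own Predation Avoidance, Smoothness, and Boundary reward formulas and analyzes the weighting parameters in detail.

```python
# Hypothetical sketch of a weighted three-part reward, assuming a grid-world
# CPP setting. None of these exact formulas come from the paper.

def predation_avoidance_reward(pos, visited):
    """Penalize revisiting covered cells; loosely, the 'predator' lingers
    over already-swept ground and the agent is rewarded for avoiding it
    (assumed form)."""
    return -1.0 if pos in visited else 1.0

def smoothness_reward(prev_heading, new_heading):
    """Penalize turns so the resulting path stays smooth, which lowers the
    turn count the paper uses as a metric (assumed form)."""
    return 0.5 if prev_heading == new_heading else -0.5

def boundary_reward(pos, grid_shape):
    """Small bonus for cells on the map boundary, nudging the agent toward
    sweeping patterns (assumed form)."""
    r, c = pos
    rows, cols = grid_shape
    on_boundary = r in (0, rows - 1) or c in (0, cols - 1)
    return 0.2 if on_boundary else 0.0

def total_reward(pos, visited, prev_heading, new_heading, grid_shape,
                 weights=(1.0, 0.5, 0.3)):
    """Weighted sum of the three components; the abstract notes that the
    choice of weights materially affects the total reward, so they would
    be tuned per environment."""
    w1, w2, w3 = weights
    return (w1 * predation_avoidance_reward(pos, visited)
            + w2 * smoothness_reward(prev_heading, new_heading)
            + w3 * boundary_reward(pos, grid_shape))
```

In such a sketch, `total_reward` would replace the scalar reward in a standard $Q$-Learning update, leaving the rest of the tabular update rule unchanged.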

Citation (APA)

Zhang, M., Cai, W., & Pang, L. (2023). Predator-Prey Reward Based Q-Learning Coverage Path Planning for Mobile Robot. IEEE Access, 11, 29673–29683. https://doi.org/10.1109/ACCESS.2023.3255007
