Autonomous unmanned systems have become attractive for a wide range of military and civilian applications, largely because they can autonomously carry payloads for utility, sensing, and other mission-specific purposes. A key challenge in realizing such systems, however, is executing complex group missions that require coordination and collaboration among multiple platforms. This paper presents a cooperative navigation approach that enables multiple unmanned surface vehicles (multi-USV) to autonomously capture a maneuvering target while avoiding both static and dynamic obstacles. The approach adopts a hybrid multi-agent deep reinforcement learning framework that uses heuristic mechanisms to guide the vehicles' group mission learning. The framework consists of two stages. In the first stage, sets of navigation subgoals are generated from expert knowledge, and a goal-selection heuristic based on an immune network model chooses navigation targets during training. In the second stage, execution of the selected goals is learned with actor-critic proximal policy optimization (PPO). Simulation results on multi-USV target capture show that the proposed approach can abstract and guide coordination learning for the unmanned vehicle group and achieve generally optimized mission execution.
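The abstract names two concrete ingredients, an immune-network-based goal-selection heuristic and actor-critic PPO, that can be illustrated briefly. The sketch below is not the authors' implementation: the Farmer-style concentration dynamics, the parameter values, and all function names are assumptions introduced only to convey the flavor of the two stages.

```python
"""Minimal two-stage sketch (illustrative assumptions, not the paper's code):
(1) an immune-network-inspired heuristic scores candidate navigation subgoals,
(2) a PPO-style clipped objective stands in for the policy-learning stage."""
import numpy as np

def immune_goal_selection(affinity_to_target, mutual_affinity, concentrations,
                          k1=1.0, k2=1.0, death_rate=0.1, dt=0.1):
    """One Euler step of an idiotypic-network-style update: each candidate
    subgoal acts like an 'antibody' whose concentration grows with its affinity
    to the current situation (the 'antigen') and with stimulation from other
    subgoals, and decays through suppression and a natural death rate.
    The subgoal with the highest concentration is selected."""
    stimulation = k1 * mutual_affinity @ concentrations    # support from other subgoals
    suppression = k2 * mutual_affinity.T @ concentrations  # inhibition by other subgoals
    growth = stimulation - suppression + affinity_to_target - death_rate
    concentrations = np.clip(concentrations + dt * growth * concentrations, 1e-6, None)
    return concentrations, int(np.argmax(concentrations))

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized):
    L = E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)]."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage).mean()

# Toy usage: 4 candidate subgoals (e.g., intercept points around the target).
rng = np.random.default_rng(0)
n_goals = 4
affinity = rng.uniform(0.0, 1.0, size=n_goals)            # fit of each subgoal to the current state
mutual = rng.uniform(0.0, 0.3, size=(n_goals, n_goals))    # pairwise stimulation/suppression strengths
conc = np.ones(n_goals) / n_goals

for _ in range(50):                                        # iterate the network toward a steady state
    conc, best = immune_goal_selection(affinity, mutual, conc)
print("selected subgoal index:", best)

# Toy evaluation of the clipped surrogate for a batch of 5 transitions.
ratio = rng.uniform(0.8, 1.2, size=5)
adv = rng.normal(size=5)
print("clipped surrogate:", ppo_clip_objective(ratio, adv))
```

In the framework described by the abstract, the heuristic stage supplies the navigation subgoal and the PPO-trained actor-critic policy then learns to execute it; the two pieces are shown here in isolation only for clarity.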
CITATION STYLE
Nantogma, S., Zhang, S., Yu, X., An, X., & Xu, Y. (2023). Multi-USV Dynamic Navigation and Target Capture: A Guided Multi-Agent Reinforcement Learning Approach. Electronics (Switzerland), 12(7). https://doi.org/10.3390/electronics12071523