Towards Real-Time Path Planning through Deep Reinforcement Learning for a UAV in Dynamic Environments

154 citations · 156 Mendeley readers

Abstract

Path planning remains a challenge for Unmanned Aerial Vehicles (UAVs) in dynamic environments with potential threats. In this paper, we propose a Deep Reinforcement Learning (DRL) approach to UAV path planning based on global situation information. We use the STAGE Scenario software to provide the simulation environment, in which a situation assessment model is developed that accounts for the UAV's survival probability under enemy radar detection and missile attack. We employ the dueling double deep Q-network (D3QN) algorithm, which takes a set of situation maps as input and approximates the Q-values of all candidate actions. In addition, the ε-greedy strategy is combined with heuristic search rules to select an action. We demonstrate the performance of the proposed method under both static and dynamic task settings.
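The two components named in the abstract, the dueling Q-value aggregation and ε-greedy selection restricted by heuristic rules, can be sketched as follows. This is a minimal numpy-only illustration; the function names and the mask-based interface for the heuristic rules are our assumptions, not the paper's actual code:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    In a D3QN, `value` and `advantages` would come from the two output
    streams of the network; here they are plain numbers for illustration.
    """
    return value + advantages - advantages.mean()

def select_action(q_values, candidate_mask, epsilon, rng):
    """ε-greedy over heuristically pruned candidates.

    `candidate_mask[a]` is True if the heuristic search rules allow
    action a (the exact rules are defined in the paper, not here).
    """
    candidates = np.flatnonzero(candidate_mask)
    if rng.random() < epsilon:
        # Explore: uniform choice among allowed actions only.
        return int(rng.choice(candidates))
    # Exploit: greedy over allowed actions, disallowed ones masked out.
    masked_q = np.where(candidate_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: three candidate actions, the third forbidden by the rules.
q = dueling_q(1.0, np.array([0.0, 1.0, 2.0]))
rng = np.random.default_rng(0)
action = select_action(q, np.array([True, True, False]), epsilon=0.1, rng=rng)
```

Subtracting the mean advantage makes the V/A decomposition identifiable, which is the standard dueling-network choice; masking before the argmax keeps exploration and exploitation confined to the actions the heuristic rules permit.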

Citation (APA)

Yan, C., Xiang, X., & Wang, C. (2020). Towards Real-Time Path Planning through Deep Reinforcement Learning for a UAV in Dynamic Environments. Journal of Intelligent and Robotic Systems: Theory and Applications, 98(2), 297–309. https://doi.org/10.1007/s10846-019-01073-3
