End‐to‐end deep reinforcement learning for image‐based UAV autonomous control

Abstract

To achieve perception‐based autonomous control of UAVs, state‐of‐the‐art work favors schemes with onboard sensing and computing, which typically consist of several separate modules, each with its own complicated algorithm. Most of these methods depend on handcrafted designs and prior models, leaving little capacity for adaptation and generalization. Inspired by research on deep reinforcement learning, this paper proposes a new end‐to‐end autonomous control method that collapses the separate modules of the traditional control pipeline into a single neural network. An image‐based reinforcement learning framework is established, built on the design of the network architecture and the reward function. Training is performed with model‐free algorithms developed for the specific mission, and the resulting control policy network maps the input image directly to continuous actuator control commands. A simulation environment for the UAV landing scenario was built, and the results under different typical cases, including both small and large initial lateral or heading‐angle offsets, show that the proposed end‐to‐end method is feasible for perception‐based autonomous control.
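
The core idea summarized above is a single policy network that maps a raw camera image directly to continuous actuator commands. As a rough illustration only, the sketch below shows what such an image‐to‐action policy could look like in PyTorch; the layer sizes, the 84x84 input resolution, and the 4‐dimensional action space are illustrative assumptions, not details taken from the paper.

# Minimal sketch of an image-to-action control policy (assumes PyTorch).
# Layer sizes, the 84x84 input resolution, and the 4-dim action space
# (e.g., throttle and attitude commands) are illustrative assumptions.
import torch
import torch.nn as nn

class ImagePolicy(nn.Module):
    def __init__(self, action_dim: int = 4):
        super().__init__()
        # Convolutional encoder: raw camera frame -> feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected head: features -> continuous actuator commands
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
            nn.Tanh(),  # squash commands into [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, 84, 84), normalized to [0, 1]
        return self.head(self.encoder(image))

policy = ImagePolicy()
frame = torch.rand(1, 3, 84, 84)   # one normalized camera frame
action = policy(frame)             # continuous control command
print(action.shape)                # torch.Size([1, 4])

In an actual end‐to‐end pipeline, a network of this kind would be trained with a model‐free reinforcement learning algorithm against a mission‐specific reward, so that the same forward pass serves as the controller at deployment time.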

Citation (APA)

Zhao, J., Sun, J., Cai, Z., Wang, L., & Wang, Y. (2021). End‐to‐end deep reinforcement learning for image‐based UAV autonomous control. Applied Sciences (Switzerland), 11(18). https://doi.org/10.3390/app11188419
