Learning unmanned aerial vehicle control for autonomous target following

Abstract

While deep reinforcement learning (RL) methods have achieved unprecedented successes in a range of challenging problems, their applicability has been mainly limited to simulation or game domains due to the high sample complexity of trial-and-error learning. Real-world robotic applications, by contrast, demand a data-efficient learning process that respects safety-critical constraints. In this paper, we consider the challenging problem of learning unmanned aerial vehicle (UAV) control for tracking a moving target. To acquire a strategy that couples perception and control, we represent the policy by a convolutional neural network. We develop a hierarchical approach that combines a model-free policy gradient method with a conventional feedback proportional-integral-derivative (PID) controller to enable stable learning without catastrophic failure. The network is trained by a combination of supervised learning from raw images and reinforcement learning in simulation. We show that the proposed approach learns a target-following policy efficiently in a simulator and that the learned behavior transfers successfully to a DJI quadrotor platform for real-world UAV control.
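
The abstract gives no code, but the hierarchical idea it describes can be sketched briefly: a CNN policy maps raw images to a high-level setpoint, and a conventional PID loop turns the tracking error on that setpoint into low-level commands. The sketch below is an illustrative assumption, not the authors' implementation; the class names (PolicyCNN, PID), the network architecture, and the gain values are all placeholders.

# Minimal sketch (assumed, not the paper's code) of a CNN policy
# layered on top of a PID controller, as the abstract describes.
import torch
import torch.nn as nn

class PolicyCNN(nn.Module):
    """Maps an RGB image to a 3-D velocity setpoint (vx, vy, vz)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)

    def forward(self, img):  # img: (batch, 3, H, W)
        return self.head(self.features(img).flatten(1))

class PID:
    """Scalar PID loop that tracks one velocity component."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One control tick: the learned policy proposes a setpoint from the
# camera image; the PID loop tracks it with a low-level command.
policy = PolicyCNN().eval()
pid_x = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.02)  # gains are placeholders
with torch.no_grad():
    setpoint = policy(torch.zeros(1, 3, 128, 128))[0]  # dummy image
command_x = pid_x.step(float(setpoint[0]), measured=0.0)

Because the PID loop always mediates between the policy output and the vehicle, a poor setpoint from an undertrained network degrades tracking rather than destabilizing the UAV, which is the "stable learning without catastrophic failure" property the abstract claims.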

Citation (APA)

Li, S., Liu, T., Zhang, C., Yeung, D. Y., & Shen, S. (2018). Learning unmanned aerial vehicle control for autonomous target following. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 4936–4942). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/685
