Deep learning quadcopter control via risk-aware active learning

34 citations · 80 readers on Mendeley

Abstract

Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers that also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper, we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work, this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.
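The abstract describes an imitation-learning loop in which a neural network policy is trained on trajectory-optimizer output, and states where the policy is risky are over-sampled for further solver labeling. The following is a minimal sketch of that idea under our own assumptions, not the paper's actual algorithm: the risk measure, the `solve_trajectory` solver stub, and the `policy_predict` network stub are all hypothetical stand-ins for the paper's trajectory optimizer and deep network.

```python
import random

def risk_aware_resample(states, risk, n_samples, rng=None):
    """Draw states for solver labeling with probability proportional to risk.

    High-risk states (e.g. those near obstacles or where the policy
    disagrees with the solver) are over-sampled, so the imitation
    dataset concentrates where policy failures would be costly.
    """
    rng = rng or random.Random(0)
    total = sum(risk)
    weights = [r / total for r in risk]
    return rng.choices(states, weights=weights, k=n_samples)

def active_learning_round(policy_predict, solve_trajectory, states, n_query):
    """One round of risk-aware active learning (hypothetical sketch).

    Risk here is a placeholder: the gap between the cheap policy's
    action and the expensive solver's action at each state.
    """
    risk = [abs(policy_predict(s) - solve_trajectory(s)) + 1e-6
            for s in states]
    queries = risk_aware_resample(states, risk, n_query)
    # Label only the resampled states with the expensive optimizer,
    # then append these pairs to the training set for the next epoch.
    return [(s, solve_trajectory(s)) for s in queries]
```

The resampling step is what makes the loop risk-aware: a uniform sample over visited states would spend most of the solver budget on easy, low-risk regions, whereas weighting by a risk score focuses labeling effort where safety constraints are close to being violated.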

Citation (APA)

Andersson, O., Wzorek, M., & Doherty, P. (2017). Deep learning quadcopter control via risk-aware active learning. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 3812–3818). AAAI press. https://doi.org/10.1609/aaai.v31i1.11041
