Visual model-predictive localization for computationally efficient autonomous racing of a 72-g drone

Abstract

Drone racing is becoming a popular e-sport all over the world, and beating the best human drone race pilots has quickly become a new major challenge for artificial intelligence and robotics. In this paper, we propose a novel sensor fusion method called visual model-predictive localization (VML). Within a small time window, VML approximates the error between the model-predicted positions and the visual measurements as a linear function. Once the parameters of this function are estimated with the RANSAC algorithm, the error model can be used to correct the model predictions in the future. In this way, outliers are handled efficiently and the vision delay is compensated as well. Theoretical analysis and simulation results show a clear advantage over Kalman filtering when dealing with the occasional large outliers and vision delays that occur in fast drone racing. Flight tests are performed on a tiny racing quadrotor named "Trashcan," equipped with a JeVois smart camera, for a total weight of 72 g. An average speed of 2 m/s is achieved, with a maximum speed of 2.6 m/s. To the best of our knowledge, this flying platform is currently the smallest autonomous racing drone in the world, while still being one of the fastest autonomous racing drones.
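
The core idea described in the abstract can be illustrated with a short sketch: over a sliding window, fit a linear model of the error between the model-predicted position and the (delayed, outlier-prone) vision measurements using RANSAC, then use that model to correct the current prediction. This is a minimal illustration, not the authors' implementation; the window data, threshold values, and function names below are assumptions for the example.

    # Minimal VML-style sketch (illustrative, not the paper's code):
    # fit error e(t) ~ a + b*t robustly with RANSAC, then correct the prediction.
    import numpy as np

    def ransac_line(t, e, iters=100, inlier_thresh=0.1, rng=np.random.default_rng(0)):
        """Robustly fit e ~ a + b*t; returns (a, b)."""
        best_count, best_params = 0, (float(np.mean(e)), 0.0)
        n = len(t)
        for _ in range(iters):
            i, j = rng.choice(n, size=2, replace=False)
            if t[i] == t[j]:
                continue
            b = (e[j] - e[i]) / (t[j] - t[i])
            a = e[i] - b * t[i]
            inliers = np.abs(e - (a + b * t)) < inlier_thresh
            if inliers.sum() > best_count:
                # refine with a least-squares fit on the inlier set
                A = np.vstack([np.ones(inliers.sum()), t[inliers]]).T
                a, b = np.linalg.lstsq(A, e[inliers], rcond=None)[0]
                best_count, best_params = int(inliers.sum()), (a, b)
        return best_params

    # Prediction-minus-vision position errors in the window (one large outlier at t=0.2).
    t_win = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
    e_win = np.array([0.02, 0.05, 0.50, 0.08, 0.11, 0.13])

    a, b = ransac_line(t_win, e_win)
    t_now = 0.6                              # current time (vision lags behind this)
    pred_now = 1.80                          # model-predicted position at t_now, in meters
    corrected = pred_now - (a + b * t_now)   # subtract the extrapolated prediction error
    print(f"corrected position: {corrected:.3f} m")

Because the error model is extrapolated to the current time, the correction also accounts for the vision processing delay, and the RANSAC fit keeps the single large outlier from corrupting the estimate.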

Citation (APA)
Li, S., van der Horst, E., Duernay, P., De Wagter, C., & de Croon, G. C. H. E. (2020). Visual model-predictive localization for computationally efficient autonomous racing of a 72-g drone. Journal of Field Robotics, 37(4), 667–692. https://doi.org/10.1002/rob.21956
