A deep neural network sensor for visual servoing in 3D spaces

Abstract

This paper describes a novel stereo vision sensor based on deep neural networks that can be used to produce a feedback signal for visual servoing in unmanned aerial vehicles such as drones. Two deep convolutional neural networks attached to the drone's stereo camera are trained to detect wind turbines in images, and stereo triangulation is used to calculate the distance from a wind turbine to the drone. Our experimental results show that the sensor produces data accurate enough to be used for servoing, even in the presence of noise generated when the drone is not completely stable. Our results also show that appropriate filtering of the signals is needed and that, to produce correct results, it is very important to keep the wind turbine within the field of view of both cameras, so that both deep neural networks can detect it.
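The distance estimation step described in the abstract can be illustrated with a minimal sketch of stereo triangulation. This is not the paper's implementation; the function, parameter names, and numeric values below are hypothetical, assuming a rectified stereo pair where depth follows Z = f·B/d (focal length times baseline over horizontal disparity):

```python
def stereo_depth(focal_length_px: float, baseline_m: float,
                 x_left_px: float, x_right_px: float) -> float:
    """Estimate depth (in meters) to a target detected in both cameras
    of a rectified stereo pair, via Z = f * B / d.

    focal_length_px : focal length in pixels (hypothetical value)
    baseline_m      : distance between the two cameras in meters
    x_left_px/x_right_px : horizontal pixel coordinate of the detection
                           (e.g. a bounding-box center) in each image
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        # Mirrors the paper's observation: the target must be visible
        # (and matched) in both views for triangulation to work.
        raise ValueError("target not consistently detected in both views")
    return focal_length_px * baseline_m / disparity

# Example with made-up camera parameters:
# f = 700 px, baseline = 0.2 m, detections at x = 400 px and 386 px
# give a disparity of 14 px and a depth of 10.0 m.
depth = stereo_depth(700.0, 0.2, 400.0, 386.0)
```

Because the detections come from two independent networks, the resulting depth signal is noisy, which is why the abstract notes that filtering is needed before the signal is used for servoing.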

Citation (APA)
Durdevic, P., & Ortiz-Arroyo, D. (2020). A deep neural network sensor for visual servoing in 3D spaces. Sensors (Switzerland), 20(5). https://doi.org/10.3390/s20051437
