AI-powered vision-aided navigation and ground obstacles detection for UAM approach and landing


Abstract

This paper describes a vision-aided navigation and ground obstacle detection pipeline for highly autonomous Vertical Take-Off and Landing aircraft in Urban Air Mobility scenarios. Landing pad and obstacle detection is provided by an ad hoc Convolutional Neural Network (CNN)-based algorithm. After landing pad detection, customized image-based algorithms extract relevant keypoints, which are then used for pose estimation. The visual pose is provided as an input to a multi-sensor navigation architecture that also integrates inertial and GNSS measurements, with the aim of providing high accuracy and integrity. To guarantee accurate visual information down to the final meters of the approach, a multi-scale pattern concept is proposed which modifies the recent proposal from EASA. The navigation and obstacle detection architecture, which includes two cameras and different operating modes, is tested with synthetic data obtained in a highly realistic simulation environment. In addition, scaled experiments with a small hexacopter are used for flight validation. Numerical and experimental analyses are presented, providing a first evaluation of the architecture's performance.
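The fusion step described in the abstract, in which a visual position fix and GNSS measurements are combined with inertial propagation, is commonly realized with a Kalman filter. The sketch below is purely illustrative and is not the paper's actual architecture: it shows a minimal 1-D position-velocity filter where inertial acceleration drives the prediction and GNSS or visual fixes (each with its own noise variance) drive the update. All names and noise values are assumptions for the example.

```python
# Illustrative sketch only (NOT the paper's architecture): a 1-D
# Kalman filter fusing inertial propagation with scalar position
# fixes from GNSS or a visual landing-pad measurement.

class Fusion1D:
    def __init__(self, pos=0.0, vel=0.0, p=1.0):
        self.x = [pos, vel]              # state: [position, velocity]
        self.P = [[p, 0.0], [0.0, p]]    # state covariance

    def predict(self, accel, dt, q=0.01):
        # Inertial propagation: integrate measured acceleration over dt.
        self.x[0] += self.x[1] * dt + 0.5 * accel * dt * dt
        self.x[1] += accel * dt
        # Propagate covariance (constant-velocity model, simplified
        # additive process noise q on both diagonal terms).
        self.P[0][0] += dt * (2.0 * self.P[0][1] + dt * self.P[1][1]) + q
        self.P[0][1] += dt * self.P[1][1]
        self.P[1][0] = self.P[0][1]
        self.P[1][1] += q

    def update_position(self, z, r):
        # Scalar position fix z (GNSS or visual) with variance r;
        # a smaller r gives the measurement more weight.
        s = self.P[0][0] + r             # innovation covariance
        k0 = self.P[0][0] / s            # Kalman gain, position
        k1 = self.P[1][0] / s            # Kalman gain, velocity
        innov = z - self.x[0]
        self.x[0] += k0 * innov
        self.x[1] += k1 * innov
        # Covariance update P = (I - K H) P with H = [1, 0].
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P[0][0] -= k0 * p00
        self.P[0][1] -= k0 * p01
        self.P[1][0] = self.P[0][1]
        self.P[1][1] -= k1 * p01
```

In a multi-sensor architecture like the one the abstract describes, the visual fix would typically be assigned a decreasing variance `r` as the aircraft nears the pad, so the filter naturally leans on vision during the final meters of the approach.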

Citation (APA)

Miccio, E., Veneruso, P., Opromolla, R., Fasano, G., Tiana, C., & Gentile, G. (2023). AI-powered vision-aided navigation and ground obstacles detection for UAM approach and landing. In AIAA/IEEE Digital Avionics Systems Conference - Proceedings. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/DASC58513.2023.10311321
