Comparing the Accuracy of sUAS Navigation, Image Co-Registration and CNN-Based Damage Detection between Traditional and Repeat Station Imaging

Abstract

The application of ultra-high spatial resolution imagery from small unpiloted aerial systems (sUAS) can provide valuable information about the status of built infrastructure following natural disasters. This study employs three methods for improving the value of sUAS imagery: (1) repeating the positioning of image stations over time using a bi-temporal imaging approach called repeat station imaging (RSI) (compared here against traditional (non-RSI) imaging), (2) co-registration of bi-temporal image pairs, and (3) damage detection using Mask R-CNN, a convolutional neural network (CNN) algorithm applied to co-registered image pairs. Infrastructure features included roads, buildings, and bridges, with simulated cracks representing damage. The accuracies of platform navigation and camera station positioning, image co-registration, and resultant Mask R-CNN damage detection were assessed for image pairs derived with RSI and non-RSI acquisition. In all cases, the RSI approach yielded the highest accuracies, with repeated sUAS navigation accuracy within 0.16 m mean absolute error (MAE) horizontally and vertically, image co-registration accuracy of 2.2 pixels MAE, and damage detection accuracy of 83.7% mean intersection over union.
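The two accuracy metrics reported in the abstract, mean absolute error (for navigation and co-registration) and intersection over union (for damage detection), can be sketched as follows. This is a minimal illustration of the standard definitions of these metrics, not the paper's actual evaluation code; the function names, input formats (coordinate arrays and boolean damage masks), and edge-case handling are assumptions.

```python
import numpy as np

def mean_absolute_error(predicted, reference):
    """MAE between predicted and reference values
    (e.g., camera positions in metres, or offsets in pixels)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(predicted - reference)))

def intersection_over_union(mask_pred, mask_true):
    """IoU between two boolean masks, e.g., a predicted damage
    mask (such as Mask R-CNN output) and a ground-truth mask."""
    mask_pred = np.asarray(mask_pred, dtype=bool)
    mask_true = np.asarray(mask_true, dtype=bool)
    union = np.logical_or(mask_pred, mask_true).sum()
    if union == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return float(np.logical_and(mask_pred, mask_true).sum() / union)
```

A "mean IoU" figure such as the 83.7% reported above is typically obtained by averaging per-feature IoU values over all evaluated instances.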

Citation (APA)
Loerch, A. C., Stow, D. A., Coulter, L. L., Nara, A., & Frew, J. (2022). Comparing the Accuracy of sUAS Navigation, Image Co-Registration and CNN-Based Damage Detection between Traditional and Repeat Station Imaging. Geosciences (Switzerland), 12(11). https://doi.org/10.3390/geosciences12110401
