Archangel: A Hybrid UAV-Based Human Detection Benchmark with Position and Pose Metadata


Abstract

Learning to detect objects, such as humans, in imagery captured by an unmanned aerial vehicle (UAV) usually suffers from tremendous variations caused by the UAV's position relative to the objects. In addition, existing UAV-based benchmark datasets do not provide adequate metadata, which is essential for precise model diagnosis and for learning features invariant to those variations. In this paper, we introduce Archangel, the first UAV-based object detection dataset composed of real and synthetic subsets captured under similar imaging conditions and accompanied by UAV position and object pose metadata. A series of experiments is carefully designed with a state-of-the-art object detector to demonstrate the benefits of leveraging the metadata during model evaluation. Moreover, several crucial insights involving both real and synthetic data during model optimization are presented. Finally, we discuss the advantages, limitations, and future directions of Archangel to highlight its distinct value for the broader machine learning community.
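
For readers unfamiliar with metadata-driven evaluation, the sketch below shows one way detector results could be broken down by the kind of metadata Archangel provides (UAV position and subject pose). It is a minimal illustration, not the paper's evaluation protocol; the record fields, the example values, and the recall_by helper are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-image records: detection outcomes paired with the UAV
# position and subject-pose metadata recorded for that frame. Field names
# and values are illustrative, not Archangel's actual schema.
records = [
    {"altitude_m": 15, "pose": "standing", "tp": 4, "fn": 0},
    {"altitude_m": 15, "pose": "prone",    "tp": 2, "fn": 2},
    {"altitude_m": 45, "pose": "standing", "tp": 3, "fn": 1},
    {"altitude_m": 45, "pose": "prone",    "tp": 1, "fn": 3},
]

def recall_by(recs, key):
    """Sum true positives and false negatives per metadata value and
    return recall for each value of the chosen metadata key."""
    tp, fn = defaultdict(int), defaultdict(int)
    for r in recs:
        tp[r[key]] += r["tp"]
        fn[r[key]] += r["fn"]
    return {k: tp[k] / (tp[k] + fn[k]) for k in tp}

print(recall_by(records, "altitude_m"))  # {15: 0.75, 45: 0.5}
print(recall_by(records, "pose"))        # {'standing': 0.875, 'prone': 0.375}
```

Slicing metrics this way is what makes position and pose annotations useful for diagnosis: it can reveal, for instance, whether a detector degrades at higher altitudes or on prone subjects, rather than reporting only a single aggregate score.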

Citation (APA)
Shen, Y. T., Lee, Y., Kwon, H., Conover, D. M., Bhattacharyya, S. S., Vale, N., … Skirlo, F. (2023). Archangel: A Hybrid UAV-Based Human Detection Benchmark with Position and Pose Metadata. IEEE Access, 11, 80958–80972. https://doi.org/10.1109/ACCESS.2023.3299235
