Data Fusion for Cross-Domain Real-Time Object Detection on the Edge


Abstract

We investigate an edge-computing scenario for robot control in which two similar neural networks run on one computational node. We test the feasibility of using a single object-detection model (YOLOv5), which has the benefit of reduced computational resource usage, against two independent, specialized models that are potentially more accurate. Our results show that using one single convolutional neural network (for both object detection and hand-gesture classification) instead of two separate ones can reduce resource usage by almost (Formula presented.). For many classes, we observed an increase in accuracy when using the model trained with more labels. For small datasets (a few hundred instances per label), we found it advisable to add labels with many instances from another dataset to increase detection accuracy.
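The data-fusion step described above, merging an object-detection dataset and a hand-gesture dataset so a single model can be trained on both, can be sketched as a label-space merge. The following is a minimal, hypothetical illustration (the class names and the YOLO-style `cls cx cy w h` annotation format are assumptions, not taken from the paper):

```python
# Hypothetical sketch: fuse the class lists of two YOLO-style datasets
# (e.g. workspace objects and hand gestures) into one unified label space,
# and remap each dataset's class indices accordingly.

def merge_label_spaces(classes_a, classes_b):
    """Return the merged class list and per-dataset index remap tables."""
    merged = list(classes_a)
    for name in classes_b:
        if name not in merged:  # shared classes are kept once
            merged.append(name)
    remap_a = {i: merged.index(n) for i, n in enumerate(classes_a)}
    remap_b = {i: merged.index(n) for i, n in enumerate(classes_b)}
    return merged, remap_a, remap_b

def remap_annotation(line, remap):
    """Rewrite the class index of one 'cls cx cy w h' annotation line."""
    parts = line.split()
    parts[0] = str(remap[int(parts[0])])
    return " ".join(parts)

if __name__ == "__main__":
    objects = ["person", "robot_arm", "tool"]          # assumed class names
    gestures = ["open_hand", "fist", "person"]         # "person" overlaps
    merged, remap_a, remap_b = merge_label_spaces(objects, gestures)
    print(merged)
    print(remap_annotation("2 0.5 0.5 0.1 0.2", remap_b))
```

After remapping, both annotation sets share one consistent class index space and can be concatenated into a single training set for the combined model.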

Citation (APA)

Kovalenko, M., Przewozny, D., Eisert, P., Bosse, S., & Chojecki, P. (2023). Data Fusion for Cross-Domain Real-Time Object Detection on the Edge. Sensors, 23(13). https://doi.org/10.3390/s23136138
