Human detection methods are widely used in fields such as autonomous vehicles, video surveillance, and rescue systems. To build a more effective detection system, different types of sensor data (e.g., optical, thermal, and depth data) may be combined as hybrid information. Augmenting object detection based on optical data with additional sensors such as depth and thermal cameras also yields information about the distance and temperature of classified objects, which is useful for video surveillance, rescue systems, and various other applications. In this study, a simple and effective method is introduced to fuse RGB-D and thermal sensor data for more accurate human detection. To combine the sensors accurately, they are physically fixed to each other, and the geometric relationship between them is determined using a novel method. Feature points in the optical and thermal images are extracted and matched using computer vision techniques. The proposed method is independent of any specific sensor brand, easy to implement, and suitable for real-time applications. Using both thermal and optical data, humans are detected with a widely used object detection method. The performance of the presented method is evaluated on a newly generated dataset. With YOLOv4 network weights trained on the COCO dataset, the proposed fusion improves human detection accuracy by 5% compared with optical data alone and by 37% compared with thermal data alone. After training on the newly generated dataset, detection accuracy increases by 18% over the best single-sensor result.
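The abstract describes two technical steps that a short sketch can make concrete: matching feature points across the optical and thermal images, and using the resulting geometric relationship to align the two sensors before detection. The Python/OpenCV snippet below is a minimal illustration of that idea only; the ORB features, brute-force matching, RANSAC threshold, blending weights, and file names are assumptions for illustration, not the authors' actual pipeline (the paper's own cross-modal matching method is described as novel and may differ).

import cv2
import numpy as np

# Illustrative input frames (file names are hypothetical).
rgb = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE)          # optical frame
thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)  # thermal frame

# Detect and describe feature points in both modalities
# (ORB is one common, patent-free choice; an assumption here).
orb = cv2.ORB_create(nfeatures=2000)
kp_rgb, des_rgb = orb.detectAndCompute(rgb, None)
kp_th, des_th = orb.detectAndCompute(thermal, None)

# Match descriptors across modalities and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_th, des_rgb), key=lambda m: m.distance)[:100]

# Estimate the homography relating the two sensors from the matched points.
src = np.float32([kp_th[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_rgb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the thermal frame into the optical frame so the two modalities
# overlap pixel-for-pixel and can be fused for a YOLOv4-style detector.
h, w = rgb.shape
thermal_aligned = cv2.warpPerspective(thermal, H, (w, h))
fused = cv2.addWeighted(rgb, 0.5, thermal_aligned, 0.5, 0)

Because the sensors are rigidly mounted to each other, a homography like H needs to be estimated only once offline and can then be reused for every frame, which is consistent with the abstract's claim that the method is suitable for real-time applications.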
Ozcan, A., & Cetin, O. (2022). A Novel Fusion Method With Thermal and RGB-D Sensor Data for Human Detection. IEEE Access, 10, 66831–66843. https://doi.org/10.1109/ACCESS.2022.3185402