YOLO-SF: YOLO for Fire Segmentation Detection


Abstract

To address the missed detections, false detections, and low accuracy of current fire detection algorithms, a segmentation detection algorithm, YOLO-SF, is proposed. The algorithm combines instance segmentation with the YOLOv7-Tiny object detection algorithm to improve detection accuracy. We gather images containing both fire and non-fire elements to create a fire segmentation dataset (FSD). The segmentation detection head of YOLOR is adopted to improve segmentation accuracy and enhance the model's ability to express detail. The MobileViTv2 module is introduced to build the backbone network, which effectively reduces the parameter count while preserving the network's feature-extraction ability. The Efficient Layer Aggregation Network (ELAN) in the neck is augmented with the Convolutional Block Attention Module (CBAM) to broaden the model's receptive field and enhance its attention to both channel and spatial information in fire images. Additionally, Varifocal Loss is used to address inaccurate object localization in the edge regions of fire images. Compared with the YOLOv7-Tiny segmentation algorithm, for Box and Mask respectively, precision increases by 5.9% and 6.2%, recall increases by 2.5% and 3.3%, and mAP increases by 4% and 6%. In addition, the FPS reaches 55.64, satisfying the requirements for real-time detection. The improved algorithm exhibits good generalization performance and robustness.
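As a rough illustration of the Varifocal Loss idea mentioned in the abstract, the following is a minimal scalar sketch (the function name, default hyperparameters, and the per-prediction formulation are illustrative assumptions, not taken from this paper's implementation):

```python
import math

def varifocal_loss(p, q, alpha=0.75, gamma=2.0):
    """Varifocal Loss for a single prediction (illustrative sketch).

    p: predicted IoU-aware classification score, in (0, 1)
    q: target score (IoU with ground truth for positives, 0 for negatives)

    Positives are weighted by the target quality q, so high-IoU examples
    dominate training; negatives are down-weighted by alpha * p**gamma,
    suppressing the loss contribution of easy background predictions.
    """
    if q > 0:  # positive example: quality-weighted binary cross-entropy
        return -q * (q * math.log(p) + (1 - q) * math.log(1 - p))
    else:      # negative example: focally down-weighted
        return -alpha * (p ** gamma) * math.log(1 - p)
```

The asymmetric treatment (full BCE weight on positives, focal down-weighting only on negatives) is what distinguishes Varifocal Loss from the symmetric Focal Loss and is the property relevant to sharpening localization at fire-region edges.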

Citation (APA)

Cao, X., Su, Y., Geng, X., & Wang, Y. (2023). YOLO-SF: YOLO for Fire Segmentation Detection. IEEE Access, 11, 111079–111092. https://doi.org/10.1109/ACCESS.2023.3322143
