Recognizing fire at an early stage and stopping it from causing socioeconomic and environmental disasters remains a demanding task. Despite the availability of accurate deep networks, there is a need for a lightweight network that can run on resource-constrained devices and deliver real-time fire detection in smart city contexts. To address this shortcoming, we present an efficient lightweight network called FlameNet for fire detection in a smart city environment. The proposed network works in two main steps: first, FlameNet detects the fire; then, an alert is initiated and directed to the fire, medical, and rescue departments. Furthermore, we incorporate the MSA module to efficiently prioritize and enhance salient fire-related features for effective fire detection. The newly developed Ignited-Flames dataset is used to conduct a thorough analysis of several convolutional neural network (CNN) models. The proposed FlameNet achieves 99.40% accuracy for fire detection. The empirical findings and the analysis of factors such as model accuracy, size, and processing time confirm that the proposed model is suitable for real-time fire detection on resource-constrained devices.
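To make the two-step pipeline concrete, the sketch below shows a minimal detect-then-alert loop: a small CNN classifier with a spatial attention block standing in for the MSA module, followed by an alert hook that fires when the model predicts "fire". Every detail here is an illustrative assumption, not the published FlameNet design: the names SpatialAttention, FlameNetSketch, and process_frame, the layer sizes, the class index for fire, and the 0.5 threshold are all hypothetical.

```python
# Minimal sketch of a FlameNet-style detect-then-alert pipeline (PyTorch).
# Layer sizes, the attention design, and the alert hook are illustrative
# assumptions; the paper's actual MSA module and backbone may differ.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Stand-in for the MSA module: re-weights feature maps so that
    fire-like regions contribute more to the final prediction."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        attn = torch.sigmoid(self.score(x))   # per-pixel importance in [0, 1]
        return x * attn                       # emphasize salient fire features

class FlameNetSketch(nn.Module):
    """Lightweight CNN classifier: fire vs. no-fire."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attention = SpatialAttention(32)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):
        return self.head(self.attention(self.features(x)))

def process_frame(model, frame, notify):
    """Step 1: classify the frame. Step 2: trigger an alert on fire."""
    with torch.no_grad():
        probs = torch.softmax(model(frame.unsqueeze(0)), dim=1)
    if probs[0, 1] > 0.5:                     # class 1 = fire (assumed)
        notify("fire detected", float(probs[0, 1]))

# Example: a dummy 224x224 RGB frame with a print-based alert hook; in an
# IoT deployment the hook would instead notify fire/medical/rescue services.
model = FlameNetSketch().eval()
process_frame(model, torch.rand(3, 224, 224),
              lambda msg, p: print(f"ALERT ({p:.2f}): {msg}"))
```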
Nadeem, M., Dilshad, N., Alghamdi, N. S., Dang, L. M., Song, H. K., Nam, J., & Moon, H. (2023). Visual Intelligence in Smart Cities: A Lightweight Deep Learning Model for Fire Detection in an IoT Environment. Smart Cities, 6(5), 2245–2259. https://doi.org/10.3390/smartcities6050103