Optimal features assisted multi-attention fusion for robust fire recognition in adverse conditions

Abstract: Deep neural networks have significantly enhanced visual data-based fire detection systems. However, high false alarm rates, shallow-layered networks, and poor recognition in challenging environments continue to hinder their practical deployment. To address these limitations, we introduce the Attention-Enhanced Fire Recognition Network (AEFRN), a progressive attention-over-attention framework that achieves state-of-the-art (SOTA) performance while maintaining computational efficiency. The approach introduces three key innovations. First, Convolutional Self-Attention (CSA) integrates global self-attention with convolution through dynamic kernels and trainable filters for enhanced low-level fire feature processing. Second, Recursive Atrous Self-Attention (RASA) with optimized dilation rates captures comprehensive multi-scale contextual information through a recursive formulation with minimal parameter overhead. Third, an enhanced Convolutional Block Attention Module (CBAM) with modified channel and spatial attention mechanisms provides robust feature discrimination. We validate AEFRN's interpretability using Grad-CAM visualization, demonstrating effective attention focus on fire-relevant regions. Comprehensive experimental evaluation on the FD and BoWFire benchmark datasets shows AEFRN's superiority over SOTA methods, achieving 99.11% accuracy on the FD dataset and 97.98% on the BoWFire dataset. Extensive comparisons against twelve SOTA approaches confirm AEFRN's effectiveness for fire detection in challenging scenarios while keeping the computational cost suitable for practical deployment.
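The "enhanced CBAM" named in the abstract builds on the standard Convolutional Block Attention Module (channel attention followed by spatial attention); the article's specific modifications are not described in this record, so the PyTorch sketch below shows only that well-known baseline mechanism for orientation. Module names, the reduction ratio, and the dummy tensor sizes are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of CBAM-style channel and spatial attention (Woo et al., 2018).
# This is the baseline the abstract's "enhanced CBAM" presumably starts from,
# not the authors' modified version; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeezes spatial dims with average- and max-pooling, then reweights channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # (B, C) from global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))         # (B, C) from global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Builds a 2-channel map (avg, max over channels) and learns a spatial mask."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)          # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask


class CBAMBlock(nn.Module):
    """Channel attention followed by spatial attention, as in the original CBAM."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)            # dummy fire-scene feature map
    print(CBAMBlock(64)(feats).shape)             # torch.Size([2, 64, 32, 32])
```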
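The Grad-CAM interpretability check mentioned in the abstract can be reproduced generically with forward and backward hooks on a convolutional layer. The minimal sketch below uses an off-the-shelf ResNet-18 as a stand-in classifier, since AEFRN itself is not part of this record, and the function name, chosen layer, and class index are assumptions for illustration only.

```python
# Minimal Grad-CAM sketch: weight a convolutional feature map by the spatially
# averaged gradients of a class score, then upsample to image resolution.
# The backbone is a placeholder, not the AEFRN model described in the abstract.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def grad_cam(model: torch.nn.Module, layer: torch.nn.Module,
             image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Returns an (H, W) heat map in [0, 1] for `class_idx` on a 1xCxHxW image."""
    feats, grads = {}, {}

    def fwd_hook(_, __, output):
        feats["value"] = output                    # activations of the hooked layer

    def bwd_hook(_, __, grad_output):
        grads["value"] = grad_output[0]            # gradient w.r.t. those activations

    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image)[0, class_idx]
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    weights = grads["value"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((weights * feats["value"]).sum(dim=1))        # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


if __name__ == "__main__":
    model = resnet18(weights=None).eval()          # stand-in for a fire classifier
    img = torch.randn(1, 3, 224, 224)
    heat = grad_cam(model, model.layer4, img, class_idx=0)
    print(heat.shape)                              # torch.Size([224, 224])
```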

Bibliographic Details
Main Authors: Inam Ullah (Department of Computer Engineering, Gachon University); Nada Alzaben (Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University); Yousef Ibrahim Daradkeh (Department of Computer Engineering and Information, College of Engineering in Wadi Alddawasir, Prince Sattam bin Abdulaziz University); Mi Young Lee (Office of the Research, Chung-Ang University)
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Scientific Reports
ISSN: 2045-2322
Collection: DOAJ
Institution: Kabale University
Record ID: doaj-art-d754eb5d0f644776ae0dcde1c7864cb1
Subjects: Fire detection; Explainable AI; Visual sensors; Attention mechanisms; Deep learning; Edge computing
Online Access: https://doi.org/10.1038/s41598-025-09713-5