GCSA-ResNet: a deep neural network architecture for malware detection

Bibliographic Details
Main Authors: Yukang Fan, Kun Zhang, Bing Zheng, Yu Zhou, Jinyang Zhou, Wenting Pan
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-10561-6
Description
Summary: With the exponential growth in the quantity and complexity of malware, traditional detection methods face severe challenges. This paper proposes GCSA-ResNet, a novel deep learning model that significantly enhances malware detection performance by integrating the Global Channel-Spatial Attention (GCSA) module with ResNet-50. The core innovation lies in the GCSA module, which for the first time jointly designs channel attention, channel shuffling, and spatial attention mechanisms to simultaneously capture local texture features and global dependency relationships in visualized malware images. Compared with existing attention modules such as SE and CBAM, GCSA strengthens cross-channel information interaction through a channel shuffling operation and employs spatial attention with a 7 × 7 convolutional kernel to model long-range spatial correlations more effectively. Experiments on the Malimg and Microsoft BIG 2015 datasets demonstrate that GCSA-ResNet achieves over 98.50% accuracy, a performance improvement of more than 0.5% over baseline models. Quantitative results show that the model maintains stable precision, recall, and F1-score while reducing false positive rates by 40–50%. These advances effectively address the limitations of existing methods with respect to feature degradation and cross-family misclassification.
ISSN: 2045-2322
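
The abstract describes the GCSA module as a combination of channel attention, channel shuffling, and spatial attention with a 7 × 7 convolutional kernel, inserted into a ResNet-50 backbone. The following is a minimal PyTorch sketch of such a block, based only on the abstract; the reduction ratio, group count, SE/CBAM-style formulations of the two attention branches, and the ordering of the three operations are assumptions, not the authors' implementation.

# Hedged sketch of a GCSA-style attention block. Hyperparameters (reduction,
# groups) and the exact composition of the three operations are assumed.
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Interleave channels across groups (as in ShuffleNet) to strengthen
    # cross-channel information interaction.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class GCSABlock(nn.Module):
    # Assumed ordering: channel attention -> channel shuffle -> 7x7 spatial attention.
    def __init__(self, channels: int, reduction: int = 16, groups: int = 8):
        super().__init__()
        # Squeeze-and-excitation style channel attention (assumed form).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.groups = groups
        # CBAM-style spatial attention over pooled channel maps, using the
        # 7x7 kernel mentioned in the abstract.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(x)              # reweight channels
        x = channel_shuffle(x, self.groups)      # mix information across channel groups
        avg_map = x.mean(dim=1, keepdim=True)    # per-pixel average over channels
        max_map, _ = x.max(dim=1, keepdim=True)  # per-pixel max over channels
        spatial = self.spatial_att(torch.cat([avg_map, max_map], dim=1))
        return x * spatial                       # reweight spatial positions


if __name__ == "__main__":
    # Example: apply the block to a feature map sized like a ResNet-50 stage-4 output.
    feats = torch.randn(2, 1024, 14, 14)
    out = GCSABlock(channels=1024)(feats)
    print(out.shape)  # torch.Size([2, 1024, 14, 14])

In a ResNet-50 integration, a block of this kind would typically be inserted after selected residual stages so that the channel and spatial reweighting acts on the visualized-malware feature maps before the classification head; where exactly the authors place it is not stated in the abstract.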