SEAP: squeeze-and-excitation attention guided pruning for lightweight steganalysis networks

Abstract In recent years, the increasing computational and storage demands of deep steganalysis models have drawn attention to lightweight architectures. While pruning algorithms for image steganalysis networks have been proposed, they often do not apply to networks equipped with mobile inverted bottleneck (MBConv) structures, such as EfficientNet. In this paper, we propose a Squeeze-and-Excitation Attention-based Pruning framework for image steganalysis networks, named SEAP. The method adopts a block-wise structured pruning strategy guided by the SE channel attention mechanism, where unimportant channels within each MBConv block are identified based on SE attention values and soft masks. Since pruning is conducted independently within each MBConv block and the input/output dimensions of the block remain unchanged, potential pruning conflicts across blocks are effectively avoided. In addition, we propose a sparsity regularization mechanism that adaptively adjusts the regularization strength based on the network structure, helping to preserve detection performance. Extensive experimental results demonstrate that the pruned network retains only a small fraction of the original network’s parameters and computational costs while achieving performance comparable to the original unpruned networks.

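The abstract describes ranking channels inside each MBConv block by their SE attention values and pruning the least important ones, while leaving the block's input/output dimensions untouched. As a minimal illustrative sketch only (the function name, score inputs, and pruning ratio are hypothetical, not taken from the paper), the channel-selection step could look like:

```python
# Hypothetical sketch of SE-attention-guided channel selection for one
# MBConv block, as outlined in the abstract. Only the block's internal
# (expanded) channels are candidates, so its input/output dims are unchanged.

def select_channels_to_prune(se_scores, prune_ratio):
    """se_scores: average SE attention value per internal channel of one
    MBConv block (e.g., averaged over a calibration set).
    Returns (indices to keep, indices to prune)."""
    n = len(se_scores)
    n_prune = int(n * prune_ratio)
    # Rank channels by attention; the lowest-attention channels are pruned.
    order = sorted(range(n), key=lambda i: se_scores[i])
    prune = set(order[:n_prune])
    keep = [i for i in range(n) if i not in prune]
    return keep, sorted(prune)

# Example: a block with 8 expanded channels, pruning 25% of them.
scores = [0.91, 0.12, 0.55, 0.08, 0.73, 0.40, 0.66, 0.20]
keep, prune = select_channels_to_prune(scores, 0.25)
# Channels 3 (0.08) and 1 (0.12) have the lowest attention and are pruned.
```

The paper's actual method additionally uses soft masks and an adaptive sparsity regularizer; this sketch covers only the attention-ranking idea.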
Bibliographic Details
Main Authors: Qiushi Li, Shenghai Luo, Shunquan Tan, Zhenjun Li
Format: Article
Language: English
Published: SpringerOpen, 2025-08-01
Series: EURASIP Journal on Information Security
Subjects: Steganalysis; Steganography; Deep learning; Convolutional neural network; Network pruning
Online Access: https://doi.org/10.1186/s13635-025-00212-8
Collection: DOAJ
Institution: Kabale University
ISSN: 2510-523X
Author affiliations:
Qiushi Li, Shenghai Luo: Guangdong Laboratory of Machine Perception and Intelligent Computing, Faculty of Engineering, Shenzhen MSU-BIT University
Shunquan Tan, Zhenjun Li: School of Information and Communications Technology, Shenzhen City Polytechnic