SEAP: squeeze-and-excitation attention guided pruning for lightweight steganalysis networks

Bibliographic Details
Main Authors: Qiushi Li, Shenghai Luo, Shunquan Tan, Zhenjun Li
Format: Article
Language: English
Published: SpringerOpen 2025-08-01
Series: EURASIP Journal on Information Security
Online Access: https://doi.org/10.1186/s13635-025-00212-8
Description
Summary: In recent years, the increasing computational and storage demands of deep steganalysis models have drawn attention to lightweight architectures. Although pruning algorithms for image steganalysis networks have been proposed, they often do not apply to networks built on mobile inverted bottleneck (MBConv) structures, such as EfficientNet. In this paper, we propose a Squeeze-and-Excitation Attention-based Pruning framework for image steganalysis networks, named SEAP. The method adopts a block-wise structured pruning strategy guided by the SE channel attention mechanism, in which unimportant channels within each MBConv block are identified from their SE attention values and soft masks. Because pruning is conducted independently within each MBConv block and the block's input/output dimensions remain unchanged, potential pruning conflicts across blocks are avoided. In addition, we propose a sparsity regularization mechanism that adaptively adjusts the regularization strength according to the network structure, helping to preserve detection performance. Extensive experiments demonstrate that the pruned network retains only a small fraction of the original network's parameters and computational cost while achieving performance comparable to that of the unpruned network.
ISSN: 2510-523X
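
To make the mechanism in the summary concrete, below is a minimal PyTorch-style sketch of SE-attention-guided channel scoring and masking within a single block. It is an assumption-laden illustration, not the authors' implementation: the names (SqueezeExcite, score_channels, soft_mask, sparsity_penalty), the averaging-based importance score, and the regularizer's scaling rule are all hypothetical, since the abstract does not specify these details.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Standard SE block: global average pool -> bottleneck MLP -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(1, channels // reduction)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # x: (N, C, H, W); s: per-channel attention weights in [0, 1]
        s = self.fc(x.mean(dim=(2, 3)))           # (N, C)
        return x * s.unsqueeze(-1).unsqueeze(-1), s

@torch.no_grad()
def score_channels(se: SqueezeExcite, batches) -> torch.Tensor:
    """Average SE attention over sample batches to rank channel importance
    (one plausible reading of 'identified from their SE attention values')."""
    total, n = None, 0
    for x in batches:
        _, s = se(x)
        total = s.sum(dim=0) if total is None else total + s.sum(dim=0)
        n += x.size(0)
    return total / n                              # (C,) mean attention per channel

def soft_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Hard 0/1 selection at pruning time: keep the highest-attention channels.
    During fine-tuning, the masks would instead be learnable soft values."""
    k = max(1, int(keep_ratio * scores.numel()))
    mask = torch.zeros_like(scores)
    mask[scores.topk(k).indices] = 1.0
    return mask

def sparsity_penalty(masks, base_lambda: float = 1e-4) -> torch.Tensor:
    """Toy adaptive L1 regularizer applied to learnable soft masks. How SEAP
    actually adapts the strength to the network structure is not stated in
    the abstract; scaling by the block width here is purely illustrative."""
    return sum(base_lambda * (m.numel() ** 0.5) * m.abs().sum() for m in masks)
```

Because the masks act on channels inside each MBConv block (e.g., the expanded channels feeding the depthwise convolution), each block's input and output dimensions are left intact, which matches the abstract's point that pruning blocks independently avoids conflicts across blocks.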