A lightweight high-frequency mamba network for image super-resolution
Abstract After continuous development, many researchers are exploring how to better utilize global and local information in single image super-resolution (SISR). Various methods based on convolutional neural network (CNN) and Transformer structures have emerged, but few studies have addressed how to combine these two kinds of information. We study the use of the self-attention mechanism to integrate local and global information, aiming to help the model better balance the weights of the two. At the same time, to avoid the heavy computation brought by the Transformer, we use the selective state space model VMamba to extract global information, reducing computational complexity and keeping the network lightweight. On this basis, we propose a High-frequency Mamba Network (HFMN) for SISR, which includes a local high-frequency extraction module, the Local High-Frequency Feature Block (LHFB); a VMamba-based global feature extraction module, the Mamba-Based Attention Block (MAB); and a dual attention fusion module, the Dual-information Interactive Attention Block (DIAB). The network better incorporates local and global information and has linear complexity in its global feature extraction branch. Experiments on multiple benchmark datasets demonstrate that the network outperforms recent SOTA methods in SISR while using fewer parameters. All code is available at https://github.com/taoWuuu/HFMN.
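The dual-branch pattern the abstract describes (a local high-frequency branch, a global branch, and an attention-based fusion of the two) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the module names (`LocalHFBranch`, `GlobalBranchStub`, `DualAttentionFusion`, `DualBranchBlock`) are hypothetical, and the global branch is a simple linear-complexity stand-in, not VMamba; the authors' actual implementation is at the GitHub link above.

```python
# Minimal sketch of a dual-branch SR block: local high-frequency branch,
# global-context branch, attention-based fusion. Illustrative only; this
# is NOT the authors' HFMN code.
import torch
import torch.nn as nn

class LocalHFBranch(nn.Module):
    """Extracts high-frequency detail as input minus a smoothed copy."""
    def __init__(self, dim):
        super().__init__()
        self.smooth = nn.AvgPool2d(3, stride=1, padding=1)  # low-pass filter
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)
    def forward(self, x):
        high_freq = x - self.smooth(x)  # keep edges and textures
        return self.conv(high_freq)

class GlobalBranchStub(nn.Module):
    """Stand-in for a Mamba-style global branch (linear-cost context mixing)."""
    def __init__(self, dim):
        super().__init__()
        self.pw = nn.Conv2d(dim, dim, 1)
    def forward(self, x):
        # Global average context broadcast back over the map: O(HW) cost,
        # echoing the linear complexity claimed for the global branch.
        ctx = x.mean(dim=(2, 3), keepdim=True)
        return self.pw(x + ctx)

class DualAttentionFusion(nn.Module):
    """Each branch reweights the other via channel attention, then project."""
    def __init__(self, dim):
        super().__init__()
        self.gate_l = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                    nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                    nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.proj = nn.Conv2d(2 * dim, dim, 1)
    def forward(self, local_feat, global_feat):
        # Cross gating: local features scaled by attention computed from the
        # global branch, and vice versa, so the two are rebalanced jointly.
        l = local_feat * self.gate_g(global_feat)
        g = global_feat * self.gate_l(local_feat)
        return self.proj(torch.cat([l, g], dim=1))

class DualBranchBlock(nn.Module):
    def __init__(self, dim=48):
        super().__init__()
        self.local = LocalHFBranch(dim)
        self.global_ = GlobalBranchStub(dim)
        self.fuse = DualAttentionFusion(dim)
    def forward(self, x):
        return x + self.fuse(self.local(x), self.global_(x))  # residual

x = torch.randn(1, 48, 64, 64)
print(DualBranchBlock(48)(x).shape)  # torch.Size([1, 48, 64, 64])
```

The cross-gating in the fusion module is one plausible reading of "dual-information interactive attention": each branch's channel weights are computed from the other branch, letting the model trade off local detail against global context.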
| Main Authors: | Tao Wu, Wei Xu, Yajuan Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Subjects: | Interactive attention; Image Super-Resolution; Dual branch fusion; Visual Mamba |
| Online Access: | https://doi.org/10.1038/s41598-025-11663-x |
| author | Tao Wu; Wei Xu; Yajuan Wu |
|---|---|
| collection | DOAJ |
| description | Abstract After continuous development, many researchers are exploring how to better utilize global and local information in single image super-resolution (SISR). Various methods based on convolutional neural network (CNN) and Transformer structures have emerged, but few studies have addressed how to combine these two kinds of information. We study the use of the self-attention mechanism to integrate local and global information, aiming to help the model better balance the weights of the two. At the same time, to avoid the heavy computation brought by the Transformer, we use the selective state space model VMamba to extract global information, reducing computational complexity and keeping the network lightweight. On this basis, we propose a High-frequency Mamba Network (HFMN) for SISR, which includes a local high-frequency extraction module, the Local High-Frequency Feature Block (LHFB); a VMamba-based global feature extraction module, the Mamba-Based Attention Block (MAB); and a dual attention fusion module, the Dual-information Interactive Attention Block (DIAB). The network better incorporates local and global information and has linear complexity in its global feature extraction branch. Experiments on multiple benchmark datasets demonstrate that the network outperforms recent SOTA methods in SISR while using fewer parameters. All code is available at https://github.com/taoWuuu/HFMN. |
| format | Article |
| id | doaj-art-cf85208fd61240ac8364850dcc4c98aa |
| institution | Kabale University |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| affiliations | Tao Wu: School of Electronic Information Engineering, China West Normal University; Wei Xu: School of Computer Science and Technology, China West Normal University; Yajuan Wu: School of Computer Science and Technology, China West Normal University |
| title | A lightweight high-frequency mamba network for image super-resolution |
| topic | Interactive attention; Image Super-Resolution; Dual branch fusion; Visual Mamba |
| url | https://doi.org/10.1038/s41598-025-11663-x |