Hyperspectral and Multispectral Remote Sensing Image Fusion Based on a Retractable Spatial–Spectral Transformer Network


Bibliographic Details
Main Authors: Yilin He, Heng Li, Miaosen Zhang, Shuangqi Liu, Chunyu Zhu, Bingxia Xin, Jun Wang, Qiong Wu
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/17/12/1973
Description
Summary: Hyperspectral and multispectral remote sensing image fusion is an effective approach for generating high-spatial-resolution hyperspectral images, overcoming the physical limitations of sensors. Transformer-based fusion methods constrained by the local-window self-attention mechanism often extract global information and coordinated contextual features insufficiently. Fusion that emphasizes spatial–spectral heterogeneous characteristics can significantly enhance the robustness of the joint representation of multi-source data. To address these issues, this study proposes a hyperspectral and multispectral remote sensing image fusion method based on a retractable spatial–spectral transformer network (RSST), introducing the attention-retractable mechanism into remote sensing image fusion. Furthermore, a gradient spatial–spectral recovery block is incorporated to mitigate the limited token interactions and the loss of spatial–spectral edge information. A series of experiments across multiple scales demonstrates that RSST offers significant advantages over existing mainstream image fusion algorithms.
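The attention-retractable idea referenced in the abstract is commonly realized by alternating dense attention within local windows with sparse attention over strided token groups that span the whole feature map, so tokens gain both local detail and global context. The following is a toy single-head sketch of that alternation, not the authors' RSST implementation; the function names, the window size, and the use of plain NumPy are illustrative assumptions (the sequence length is assumed divisible by the window size).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # plain single-head scaled dot-product self-attention over one token group
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def retractable_attention(tokens, window=4):
    """Alternate dense (local-window) and sparse (strided-grid) attention.

    Hypothetical simplification: a real retractable block would also use
    learned Q/K/V projections, multiple heads, and residual connections.
    """
    n = tokens.shape[0]
    # dense stage: each token attends only within its contiguous window
    dense = np.vstack([self_attention(tokens[i:i + window])
                       for i in range(0, n, window)])
    # sparse stage: strided groups let information travel across the sequence
    sparse = np.empty_like(dense)
    for s in range(window):
        idx = np.arange(s, n, window)
        sparse[idx] = self_attention(dense[idx])
    return sparse

# usage: 16 tokens of dimension 8, e.g. flattened spatial-spectral features
x = np.random.default_rng(0).normal(size=(16, 8))
y = retractable_attention(x, window=4)
```

The dense stage keeps the quadratic attention cost bounded by the window size, while the sparse stage restores long-range token interactions, which is the limitation of purely local-window self-attention that the abstract highlights.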
ISSN: 2072-4292