A VMamba-Based Spatial–Spectral Fusion Network for Remote Sensing Image Classification


Saved in:
Bibliographic Details
Main Authors: Lan Luo, Yanmei Zhang, Yanbing Xu, Tingxuan Yue, Yuxi Wang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11014540/
Description
Summary: In hyperspectral (HS) and light detection and ranging (LiDAR) collaborative classification, HS provides rich spectral information, while LiDAR offers unique elevation data. However, existing methods often extract features within each modality before fusion, which may result in insufficient fusion owing to a lack of intermodal complementarity and interaction. To address this, we propose a VMamba-based spatial–spectral fusion network (SSFN) for HS and LiDAR fusion classification, comprising a dual supplement network (DSN) and a VMamba-based integration network (VMIN), which models long-range dependencies and fully exploits the correlation and complementarity of heterogeneous information. The DSN, consisting of a spatial supplement network (Spa-SN) and a spectral supplement network (Spe-SN), supplements the features missing from each modality: the Spa-SN enriches the spatial features of HS by capturing spatial correlations between LiDAR and HS, while the Spe-SN employs spectral information from HS to compensate for the spectral features absent in LiDAR. Both modalities thus obtain a comprehensive spatial–spectral description. The VMIN then augments and interacts the supplemented features, and discriminative features are adaptively selected for classification. Extensive experiments on three benchmark datasets demonstrate that our method outperforms multiple state-of-the-art methods while requiring the fewest parameters.
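The abstract does not give the layer-level details of the Spa-SN and Spe-SN, but the bidirectional "supplement" idea it describes can be illustrated with a minimal NumPy sketch: each modality borrows the features it lacks from the other via cross-attention, then keeps its own features through a residual connection. The function name `cross_modal_supplement` and the use of scaled dot-product attention are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def cross_modal_supplement(query_feats, support_feats):
    """Supplement `query_feats` with information drawn from `support_feats`
    via scaled dot-product cross-attention (an assumed stand-in for the
    paper's Spa-SN / Spe-SN modules)."""
    d = query_feats.shape[-1]
    scores = query_feats @ support_feats.T / np.sqrt(d)   # (Nq, Ns) similarities
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over support pixels
    supplement = weights @ support_feats                  # (Nq, d) borrowed features
    return query_feats + supplement                       # residual fusion

rng = np.random.default_rng(0)
hs_feats = rng.standard_normal((16, 32))     # HS pixel features (rich spectra)
lidar_feats = rng.standard_normal((16, 32))  # LiDAR pixel features (elevation cues)

# Spa-SN direction: enrich HS with spatial structure correlated in LiDAR
hs_supplemented = cross_modal_supplement(hs_feats, lidar_feats)
# Spe-SN direction: compensate LiDAR with spectral content from HS
lidar_supplemented = cross_modal_supplement(lidar_feats, hs_feats)
print(hs_supplemented.shape, lidar_supplemented.shape)
```

After both directions run, each modality carries a spatial–spectral description, which is the state the abstract says the VMIN then integrates for classification.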
ISSN:1939-1404
2151-1535