HLSK-CASMamba: hybrid large selective kernel and convolutional additive self-attention mamba for hyperspectral image classification

Bibliographic Details
Main Authors: Xiaoqing Wan, Yupeng He, Feng Chen, Ziqi Sun, Dongtao Mo
Format: Article
Language: English
Published: Springer 2025-06-01
Series: Journal of King Saud University: Computer and Information Sciences
Subjects:
Online Access: https://doi.org/10.1007/s44443-025-00060-z
author Xiaoqing Wan
Yupeng He
Feng Chen
Ziqi Sun
Dongtao Mo
author_sort Xiaoqing Wan
collection DOAJ
description Abstract Classifying hyperspectral images (HSIs) is a key challenge in remote sensing, with convolutional neural networks (CNNs) and transformer models becoming leading techniques in this area. CNNs, while effective, often struggle to adequately capture intricate semantic features, and increasing network depth leads to significantly higher computational costs. Conversely, transformers, despite their efficacy in modeling spectral-spatial dependencies, introduce significant computational overhead due to their complexity. Mamba, leveraging the state space model (SSM), presents a compelling alternative that efficiently captures long-range dependencies in HSIs while ensuring computational efficiency with linear complexity. To improve HSI classification by simultaneously extracting rich local and global spatial-spectral features and deep semantic features, while reducing the computational complexity of the model, this paper proposes an innovative hybrid large selective kernel and convolutional additive self-attention model (HLSK-CASMamba) for HSI classification. First, we design a feature extraction module that combines a 3D convolution layer, a 2D convolution layer, and a large selective kernel (LSK) network, enabling the efficient extraction of both depth-related and spatial detail information from HSIs. Second, we propose a novel CASMamba model, with its core module, CAS-VSSM, combining convolutional additive self-attention (CAS) and the vision state-space sequence model (VSSM). This fusion leverages the local feature extraction of convolutions, the spatial dependency modeling of self-attention, and the long-range dependency handling of VSSM, enhancing the capture of both local and global context while ensuring computational efficiency. Finally, we incorporate the KANLinear module to replace the traditional linear layer, improving class-label prediction.
Extensive evaluations on three popular HSIs show that, under 10% training samples, the proposed method achieves 99.57% accuracy on the Houston 2013 dataset, 99.96% on the Botswana dataset, and 99.92% on the University of Pavia dataset, outperforming various existing advanced techniques.
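The abstract's front end passes an HSI patch through a 3D convolution (over spectral and spatial dimensions) and then a 2D convolution. The following is a minimal shape-bookkeeping sketch of that kind of pipeline; the patch size, kernel sizes, and channel counts are illustrative assumptions, not values taken from the paper.

```python
# Illustrative shape bookkeeping for a 3D-conv -> 2D-conv HSI front end.
# All kernel sizes, channel counts, and the 13x13 patch size below are
# assumptions for this sketch, not parameters from the paper.

def conv_out(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Standard convolution output-size formula."""
    return (size + 2 * padding - kernel) // stride + 1

# One spatial patch of a hyperspectral cube; the University of Pavia
# scene, for example, has 103 spectral bands.
bands, height, width = 103, 13, 13

# Assumed 3D convolution over (spectral, height, width): 7x3x3 kernel,
# 8 output channels, spatial padding of 1.
bands_3d = conv_out(bands, 7)               # spectral dim shrinks to 97
height_3d = conv_out(height, 3, padding=1)  # spatial dims preserved: 13
width_3d = conv_out(width, 3, padding=1)    # 13

# Folding the 8 channels into the reduced spectral axis gives the
# input channel count for the subsequent 2D convolution.
channels_2d = 8 * bands_3d                  # 8 * 97 = 776

print(bands_3d, height_3d, width_3d, channels_2d)
```

This only traces tensor shapes; the actual model additionally applies the LSK network and the CAS-VSSM blocks described above.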
format Article
id doaj-art-0acd62349f6041a2ae5fe435eb602075
institution Kabale University
issn 1319-1578
2213-1248
language English
publishDate 2025-06-01
publisher Springer
record_format Article
series Journal of King Saud University: Computer and Information Sciences
spelling doaj-art-0acd62349f6041a2ae5fe435eb602075 2025-08-20T04:02:49Z eng
Springer, Journal of King Saud University: Computer and Information Sciences, ISSN 1319-1578 / 2213-1248, 2025-06-01, 37(4), 1-20, doi 10.1007/s44443-025-00060-z
HLSK-CASMamba: hybrid large selective kernel and convolutional additive self-attention mamba for hyperspectral image classification
Xiaoqing Wan; Yupeng He; Feng Chen; Ziqi Sun; Dongtao Mo (all: College of Computer Science and Technology, Hengyang Normal University)
https://doi.org/10.1007/s44443-025-00060-z
Keywords: Hyperspectral image; Classification; Convolutional neural networks (CNNs); Large selective kernel; Conv additive self-attention; Mamba
title HLSK-CASMamba: hybrid large selective kernel and convolutional additive self-attention mamba for hyperspectral image classification
topic Hyperspectral image
Classification
Convolutional neural networks (CNNs)
Large selective kernel
Conv additive self-attention
Mamba
url https://doi.org/10.1007/s44443-025-00060-z