DiffFormer: A Differential Spatial-Spectral Transformer for Hyperspectral Image Classification

Bibliographic Details
Main Authors: Muhammad Ahmad, Manuel Mazzara, Salvatore Distefano, Adil Mehmood Khan, Silvia Liberata Ullo
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10955699/
Description
Summary: Hyperspectral image classification (HSIC) presents significant challenges due to spectral redundancy and spatial discontinuity, both of which can negatively impact classification performance. To mitigate these issues, this work proposes the differential spatial-spectral transformer (<italic>DiffFormer</italic>), a novel framework designed to enhance feature discrimination and improve classification accuracy. At its core, <italic>DiffFormer</italic> incorporates a differential multihead self-attention mechanism, which accentuates subtle spectral-spatial variations by applying differential attention across neighboring patches. The architecture integrates spectral-spatial tokenization, utilizing 3-D convolution-based patch embeddings, positional encoding, and a stack of transformer layers augmented with the SwiGLU activation function, a variant of the gated linear unit, to enable efficient and expressive feature extraction. In addition, a token-based classification head ensures robust representation learning, facilitating precise pixelwise labeling. Extensive experiments on benchmark hyperspectral datasets demonstrate that <italic>DiffFormer</italic> consistently outperforms state-of-the-art methods in classification accuracy, computational efficiency, and generalizability.
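The SwiGLU activation mentioned in the abstract is a published gated-linear-unit variant; a minimal NumPy sketch of its standard form is below. The layer dimensions, weight names, and random inputs are illustrative assumptions, not details taken from the DiffFormer paper.

```python
import numpy as np

def swish(z):
    # Swish (SiLU) activation: z * sigmoid(z)
    return z / (1.0 + np.exp(-z))

def swiglu(x, w_gate, w_value):
    # SwiGLU feed-forward gate: Swish(x @ W_gate) elementwise-multiplied
    # by the linear branch (x @ W_value). Weight names are hypothetical.
    return swish(x @ w_gate) * (x @ w_value)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 tokens, model dimension 8 (illustrative)
w_g = rng.standard_normal((8, 16))     # hidden dimension 16 (illustrative)
w_v = rng.standard_normal((8, 16))
out = swiglu(x, w_g, w_v)
print(out.shape)
```

In transformer feed-forward blocks, this gated form typically replaces the usual `activation(x @ W1) @ W2` pattern, with a second projection mapping the gated hidden state back to the model dimension.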
ISSN: 1939-1404
2151-1535