DSGAU: Dual-Scale Graph Attention U-Nets for Hyperspectral Image Classification With Limited Samples
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11039726/ |
| Summary: | Graph convolutional networks (GCNs) exhibit remarkable capabilities in hyperspectral image (HSI) classification tasks, primarily due to their ability to establish long-range pixel correlations, which enables the simultaneous learning of local spectral features and global contextual patterns within HSI data. However, the convolutional operations in traditional GCNs require the inclusion of all data points during graph construction, leading to significant computational overhead, particularly for large-scale datasets. To address this challenge, graph pooling effectively mitigates the high computational cost of GCNs through hierarchical downsampling, adaptive node selection, and feature-preservation mechanisms. Nevertheless, prevalent graph pooling techniques often employ single-scale strategies that inadequately capture multiscale features, potentially leading to information loss or redundancy. To address this issue, we propose dual-scale graph attention U-Nets (DSGAU) for HSI classification with limited samples. First, we design a cross-scale and self-attention module that reduces the graph structure while extracting detailed information from the HSI for graph construction, where a cross-scale attention branch establishes inter-level feature correlations and a self-attention branch enhances intra-level contextual learning. Second, we design a dual-scale constrained graph U-Nets encoder and use an attention feature fusion module that dynamically weights these multiscale representations with channel-wise attention coefficients, effectively resolving feature redundancy. Finally, we introduce a graph attention network with a contrastive normalization layer to replace traditional GCNs, enabling dynamic graph-structure updating during propagation and alleviating over-smoothing through differential feature enhancement. The classification performance of the proposed method is evaluated on four benchmark datasets. Experimental results show that the proposed method outperforms existing cutting-edge methods, with overall accuracy (OA) improvements of 2.37%–17.8% (Indian Pines), 1.05%–10.3% (Salinas), and 1.54%–9.64% (WHU-Hi-LongKou) under limited training sample conditions. |
| ISSN: | 1939-1404; 2151-1535 |
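The abstract's final contribution replaces traditional GCN convolutions with a graph attention network (GAT) layer. As context for readers unfamiliar with GAT, below is a minimal NumPy sketch of a single-head GAT forward pass; all shapes, names, and the example chain graph are illustrative assumptions, not the authors' DSGAU implementation (which additionally adds a contrastive normalization layer not sketched here).

```python
# Minimal single-head GAT layer sketch (NumPy only).
# Attention logit: e_ij = LeakyReLU(a^T [W h_i || W h_j]),
# normalized by softmax over each node's neighborhood.
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    Wh = H @ W                                   # project features: (N, Fp)
    Fp = Wh.shape[1]
    # split a into source/destination halves so e_ij decomposes additively
    src = Wh @ a[:Fp]                            # (N,)
    dst = Wh @ a[Fp:]                            # (N,)
    e = src[:, None] + dst[None, :]              # (N, N) raw logits
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-neighbors
    e -= e.max(axis=1, keepdims=True)            # stable softmax
    att = np.exp(e)
    att /= att.sum(axis=1, keepdims=True)        # rows sum to 1
    return att @ Wh                              # aggregate neighbors: (N, Fp)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))                      # 5 nodes (e.g. superpixels)
A = np.eye(5) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)  # chain graph
W = rng.normal(size=(4, 3))
a = rng.normal(size=(6,))
out = gat_layer(H, A, W, a)
print(out.shape)  # (5, 3)
```

Because the attention weights are recomputed from the current node features at every layer, the effective edge weighting changes as features propagate, which is the "dynamic graph structure updating" property the abstract attributes to its GAT-based module.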