Triple Graph Convolutional Network for Hyperspectral Image Feature Fusion and Classification


Bibliographic Details
Main Authors: Maryam Imani, Daniele Cerra
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/17/9/1623
Description
Summary: Most graph-based networks use superpixel generation as a preprocessing step, treating each superpixel as a graph node. For hyperspectral images, which exhibit high spectral variability, treating an entire image region as a single graph node can degrade a network's class discrimination ability in pixel-based classification. Moreover, most graph-based networks focus on global feature extraction, whereas both local and global information matter for pixel-based classification. To address these challenges, this work sets superpixel-based graphs aside and instead proposes a Graph-based Feature Fusion (GF2) method built on three different graphs. A local patch is taken around each pixel under test, and, at the same time, global anchors with the highest informational content are selected from the entire scene. The first graph models relationships between the neighboring pixels in the local patch and the global anchors, while the second and third graphs use the global anchors and the pixels of the local patch as their nodes, respectively. These graphs are processed by graph convolutional networks, and the resulting features are fused through a cross-attention mechanism. Experiments on three hyperspectral benchmark datasets show that the GF2 network achieves high classification performance compared to state-of-the-art methods while requiring a reasonable number of learnable parameters.
ISSN: 2072-4292
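The summary describes three graph branches processed by graph convolutional networks and fused by cross-attention. As a rough single-layer sketch (not the authors' implementation: the patch/anchor sizes, the random adjacency construction, the shared weight matrix, and the additive fusion rule are all invented here for illustration), the idea might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One symmetrically normalized graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # degree normalization
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

def cross_attention(Q_feat, KV_feat):
    """Scaled dot-product cross-attention: queries from one branch, keys/values from another."""
    scores = Q_feat @ KV_feat.T / np.sqrt(Q_feat.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ KV_feat

# Hypothetical sizes: a 5x5 local patch (25 pixels), 10 global anchors, 8 spectral features.
n_patch, n_anchor, d_in, d_hid = 25, 10, 8, 16
X_patch = rng.standard_normal((n_patch, d_in))    # pixels of the local patch
X_anchor = rng.standard_normal((n_anchor, d_in))  # informative global anchors

def random_adjacency(n, density=0.3):
    # Placeholder symmetric adjacency; the paper derives these from pixel/anchor similarity.
    A = (rng.random((n, n)) < density).astype(float)
    return np.maximum(A, A.T)

A_joint = random_adjacency(n_patch + n_anchor)  # graph 1: patch pixels + anchors together
A_anchor = random_adjacency(n_anchor)           # graph 2: anchors only
A_patch = random_adjacency(n_patch)             # graph 3: patch pixels only

W = rng.standard_normal((d_in, d_hid)) * 0.1    # shared weights, for simplicity only

H_joint = gcn_layer(A_joint, np.vstack([X_patch, X_anchor]), W)[:n_patch]
H_anchor = gcn_layer(A_anchor, X_anchor, W)
H_patch = gcn_layer(A_patch, X_patch, W)

# Fusion sketch: local features attend to global anchor features, then combine
# with the joint-graph features (a stand-in for the paper's cross-attention fusion).
fused = H_patch + H_joint + cross_attention(H_patch, H_anchor)
print(fused.shape)  # → (25, 16): one fused feature vector per pixel of the patch
```

The fused per-pixel features would then feed a classifier head; the point of the sketch is only the data flow of the three branches and their attention-based combination.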