PICT-Net: A Transformer-Based Network with Prior Information Correction for Hyperspectral Image Unmixing

Bibliographic Details
Main Authors: Yiliang Zeng, Na Meng, Jinlin Zou, Wenbin Liu
Format: Article
Language:English
Published: MDPI AG 2025-02-01
Series:Remote Sensing
Online Access:https://www.mdpi.com/2072-4292/17/5/869
Description
Summary:Transformers have performed favorably in recent hyperspectral unmixing studies, where the self-attention mechanism can retain spectral information and spatial details. However, the lack of reliable prior information to guide correction has limited the accuracy and robustness of such networks. To benefit from the advantages of the Transformer architecture and to improve the interpretability and robustness of the network, a dual-branch network with prior information correction, incorporating a Transformer (PICT-Net), is proposed. The upper branch uses pre-extracted endmembers to provide pure-pixel prior information, while the lower branch employs a Transformer structure for feature extraction and unmixing. A weight-sharing strategy between the two branches facilitates information exchange. The deep integration of prior knowledge into the Transformer architecture effectively mitigates endmember variability in hyperspectral unmixing and enhances the model’s generalization capability and accuracy across diverse scenarios. Experiments conducted on four real datasets demonstrate the effectiveness and superiority of the proposed model.
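The abstract builds on the standard linear mixing model, in which each pixel spectrum is a convex combination of endmember spectra weighted by abundances (nonnegative, summing to one), and on the idea of injecting pre-extracted endmembers as a prior. The following is a minimal, hypothetical NumPy sketch of that underlying model only, not the actual PICT-Net implementation: all names (`E_prior`, `W_enc`, `unmix`) are illustrative, the encoder is a random linear stand-in for the Transformer branch, and the softmax simply enforces the abundance constraints.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax; enforces nonnegativity and sum-to-one.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical toy setup: 50 spectral bands, 4 endmembers.
rng = np.random.default_rng(0)
n_bands, n_end = 50, 4

# Pre-extracted endmembers standing in for the prior (upper) branch;
# in practice these would come from an endmember-extraction algorithm.
E_prior = rng.uniform(0.1, 1.0, size=(n_bands, n_end))

# Stand-in for the learned (lower) branch: a linear encoder mapping a
# pixel spectrum to abundance logits. PICT-Net uses a Transformer here.
W_enc = rng.normal(scale=0.1, size=(n_end, n_bands))

def unmix(x, E=E_prior, W=W_enc):
    """Estimate abundances for one pixel and reconstruct it with the
    prior endmember matrix shared by both branches."""
    a = softmax(W @ x)   # abundance estimate (sums to 1, nonnegative)
    x_hat = E @ a        # linear mixing model reconstruction
    return a, x_hat

# Synthetic pixel mixed from the prior endmembers with known abundances.
a_true = np.array([0.5, 0.3, 0.15, 0.05])
x = E_prior @ a_true
a_est, x_hat = unmix(x)
```

In a trained network, `W_enc` and the decoder would be optimized so that `x_hat` matches `x`; the sketch only shows how the abundance constraints and the shared endmember prior fit together.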
ISSN:2072-4292