A Semantic-Guided Cross-Attention Network for Change Detection in High-Resolution Remote Sensing Images

Bibliographic Details
Main Authors: Guowei Lu, Shunyu Yao, Yao Li, Jinbo Tang, Guangyuan Kan, Tao Sun, Changjun Liu, Deqiang Cheng, Ruilong Wei
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/17/10/1749
Description
Summary: Remote sensing change detection (CD) involves identifying differences between two satellite images of the same geographic area taken at different times. It plays a critical role in applications such as urban planning and disaster management. Traditional CD methods rely on manually extracted features, which often lack robustness and accuracy in capturing the details of objects. Recently, deep learning-based methods have expanded the applications of CD in high-resolution remote sensing images, yet they struggle to fully utilize the multi-level features extracted by backbone networks, limiting their performance. To address this challenge, we propose a Semantic-Guided Cross-Attention Network (SCANet). It introduces a Hierarchical Semantic-Guided Fusion (HSF) module, which leverages high-level semantic information to guide low-level spatial details through an attention mechanism. Additionally, we design a Cross-Attention Feature Fusion (CAFF) module to establish global correlations between bitemporal images, thereby improving feature interaction. Extensive experiments on the IWHR-data and LEVIR-CD datasets demonstrate that SCANet significantly outperforms existing state-of-the-art (SOTA) methods. Specifically, the F1-score and the Intersection over Union (IoU) score are improved by 2.002% and 3.297% on the IWHR-data dataset and by 0.761% and 1.276% on the LEVIR-CD dataset, respectively. These results validate the effectiveness of semantic-guided fusion and cross-attention feature interaction, providing new insights for advancing change detection research in high-resolution remote sensing imagery.
ISSN:2072-4292
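
The general idea behind the CAFF module described in the summary — using cross-attention so that every spatial position in one date's feature map can attend to all positions in the other date's — can be illustrated with a minimal, generic sketch. This is not the authors' implementation; the shapes, the single attention head, and the random projection matrices below are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b, w_q, w_k, w_v):
    """Attend from image-A features (queries) to image-B features (keys/values).

    feat_a, feat_b: (N, C) flattened spatial features from the two dates.
    w_q, w_k, w_v:  (C, D) projection matrices (learned in practice, random here).
    Returns (N, D): B's features re-weighted by global A-to-B correlations.
    """
    q = feat_a @ w_q                           # (N, D) queries from date A
    k = feat_b @ w_k                           # (N, D) keys from date B
    v = feat_b @ w_v                           # (N, D) values from date B
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (N, N) pairwise correlations
    attn = softmax(scores, axis=-1)            # rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
N, C, D = 16, 8, 8                             # 16 spatial positions, 8 channels
fa, fb = rng.normal(size=(N, C)), rng.normal(size=(N, C))
wq, wk, wv = (rng.normal(size=(C, D)) for _ in range(3))
out = cross_attention(fa, fb, wq, wk, wv)
print(out.shape)                               # (16, 8)
```

In a real CD network this operation would typically run in both directions (A attends to B and B attends to A), with multiple heads and learned projections, before the fused features are passed to a change-prediction head.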