Dual-Stream Spatially Aware Transformer for Remote Sensing Image Captioning


Bibliographic Details
Main Authors: Haifeng Sima, Xiangtao Ding, JianLong Wang, Mingliang Xu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/11104798/
Description
Summary: Remote sensing image captioning (RSIC) aims to generate semantically rich and syntactically accurate descriptions for remote sensing images (RSIs). However, due to the complex spatial layouts, occlusions, and overlapping objects in such images, caption generation is often challenged by semantic ambiguity. To address these issues, we propose a novel dual-stream spatially aware transformer (DSAT), which explicitly models both global and local spatial relationships to enhance spatial understanding. Specifically, DSAT introduces a dual-stream feature interaction module that extracts grid-level global features and region-level object features, and further enhances their respective spatial dependencies through multibranch convolution and a graph attention network. In addition, we design a spatially aware attention mechanism that encodes relative spatial relationships into the Transformer, allowing the model to better capture object distribution patterns and geometric relationships. Extensive experiments conducted on three benchmark datasets, namely Sydney-Captions, UCM-Captions, and RSICD, highlight the superior performance of DSAT. The proposed method achieves impressive CIDEr scores of 338.59%, 450.93%, and 275.36% on these datasets, respectively, demonstrating its effectiveness in generating high-quality captions for RSIs.
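The abstract does not give the exact formulation of the spatially aware attention mechanism, but the general idea of encoding relative spatial relationships into attention can be illustrated with a minimal NumPy sketch. Here the content-based attention scores between region features are biased by the pairwise Euclidean distance between region centers, so that spatially close objects attend to each other more strongly. The function name, the distance-based bias, and the weight `lam` are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def spatially_aware_attention(Q, K, V, centers, lam=0.5):
    """Single-head attention whose scores are biased by pairwise spatial
    distance between region centers: closer regions attend more strongly.
    Q, K, V: (n, d) region features; centers: (n, 2) spatial coordinates.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n) content scores
    # Pairwise Euclidean distances between centers: (n, 2) -> (n, n).
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    scores = scores - lam * dist                     # penalize distant pairs
    # Numerically stable row-wise softmax.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w

# Toy example: 4 regions with 8-dim features and 2-D centers in [0, 1].
rng = np.random.default_rng(0)
n, d = 4, 8
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
centers = rng.uniform(0.0, 1.0, size=(n, 2))
out, w = spatially_aware_attention(Q, K, V, centers)
```

In a full Transformer this bias would typically be a learned function of relative offsets rather than a fixed distance penalty, but the sketch shows how geometric structure can be injected directly into the attention scores.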
ISSN: 1939-1404; 2151-1535