DLNet: A Dual-Level Network with Self- and Cross-Attention for High-Resolution Remote Sensing Segmentation

Bibliographic Details
Main Authors: Weijun Meng, Lianlei Shan, Sugang Ma, Dan Liu, Bin Hu
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/17/7/1119
Description
Summary: With advancements in remote sensing technologies, high-resolution imagery has become increasingly accessible, supporting applications in urban planning, environmental monitoring, and precision agriculture. However, semantic segmentation of such imagery remains challenging due to complex spatial structures, fine-grained details, and land cover variations. Existing methods often struggle with ineffective feature representation, suboptimal fusion of global and local information, and high computational costs, limiting segmentation accuracy and efficiency. To address these challenges, we propose the dual-level network (DLNet), an enhanced framework incorporating self-attention and cross-attention mechanisms for improved multi-scale feature extraction and fusion. The self-attention module captures long-range dependencies to enhance contextual understanding, while the cross-attention module facilitates bidirectional interaction between global and local features, improving spatial coherence and segmentation quality. Additionally, DLNet optimizes computational efficiency by balancing feature refinement and memory consumption, making it suitable for large-scale remote sensing applications. Extensive experiments on benchmark datasets, including DeepGlobe and Inria Aerial, demonstrate that DLNet achieves state-of-the-art segmentation accuracy while maintaining computational efficiency.
On the DeepGlobe dataset, DLNet achieves a 76.9% mean intersection over union (mIoU), outperforming existing models such as GLNet (71.6%) and EHSNet (76.3%), while requiring less memory (1443 MB) and maintaining a competitive inference speed of 518.3 ms per image. On the Inria Aerial dataset, DLNet attains an mIoU of 73.6%, surpassing GLNet (71.2%) while reducing computational cost and achieving an inference speed of 119.4 ms per image. These results highlight DLNet's effectiveness in achieving precise and efficient segmentation of high-resolution remote sensing imagery.
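The record does not reproduce DLNet's internals, but the bidirectional global-local interaction the abstract describes follows the standard cross-attention pattern: tokens from one branch act as queries against keys/values from the other branch, and vice versa. The sketch below is a minimal NumPy illustration of that pattern only; all names, shapes, and the single-head simplification are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Scaled dot-product attention: each query token attends over
    # all tokens of the *other* branch and aggregates their values.
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (n_q, n_kv)
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # (n_q, d_k)

rng = np.random.default_rng(0)
local_tokens = rng.standard_normal((16, 32))   # e.g. fine-detail patch features
global_tokens = rng.standard_normal((64, 32))  # e.g. downsampled context features

# Bidirectional interaction: global context flows into the local branch,
# and local detail flows back into the global branch.
local_fused = cross_attention(local_tokens, global_tokens)   # (16, 32)
global_fused = cross_attention(global_tokens, local_tokens)  # (64, 32)
print(local_fused.shape, global_fused.shape)
```

In practice such modules also include learned query/key/value projections, multiple heads, and residual connections; the sketch omits those to show only the data flow between the two feature levels.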
ISSN: 2072-4292