PSDA: Pyramid Spatial Deformable Aggregation for Building Segmentation in Multiview Remote Sensing Images

Bibliographic Details
Main Authors: Xuejun Huang, Yi Wan, Yongjun Zhang, Xinyi Liu, Bin Zhang, Yameng Wang, Haoyu Guo, Yingying Pei, Zhonghua Hu
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access:https://ieeexplore.ieee.org/document/10932691/
Summary:As more and more deep learning models are designed and deployed, the performance of single-view image semantic segmentation is approaching its upper limit. With the growing availability of multiview satellite images, multiview information is gaining attention because it can resolve occlusions present in single-view images and enable cross-validation that reduces erroneous segmentation. However, current multiview semantic segmentation methods often rely on multiview voting or require complex preprocessing steps, and therefore may not fully exploit the advantages of multiview images. We analyze the complementarity and constraints of multiview information and introduce the pyramid spatial deformable aggregation (PSDA) module, a plug-and-play module designed to enhance multiview feature fusion. PSDA is the core component of our early multiview segmentation framework, which fuses information at an early stage by extracting features directly from multiview images, avoiding the complex and time-consuming production of true orthoimages. In this article, we first describe how we created the multiview segmentation dataset (MVSeg dataset) using orthoimages generated from different-view images. We then show that our method outperforms the corresponding single-view segmentation method, improving the intersection over union (IoU) metric by approximately 1.23%–3.68% on both datasets. Because multiview images are fused at an early stage, the computational complexity is 0.29–0.74 times that of the state-of-the-art method, while the IoU metric improves by approximately 2.20%–7.52% on both datasets.
ISSN:1939-1404
2151-1535