High-quality one-shot interactive segmentation for remote sensing images via hybrid adapter-enhanced foundation models


Bibliographic Details
Main Authors: Zhili Zhang, Xiangyun Hu, Yue Yang, Bingnan Yang, Kai Deng, Hengming Dai, Mi Zhang
Format: Article
Language: English
Published: Elsevier 2025-05-01
Series:International Journal of Applied Earth Observations and Geoinformation
Online Access:http://www.sciencedirect.com/science/article/pii/S156984322500113X
Description
Summary: Interactive segmentation of remote sensing images enables the rapid generation of annotated samples, providing training data for deep learning algorithms and facilitating high-quality extraction and classification of remote sensing objects. However, existing interactive segmentation methods, such as SAM, are primarily designed for natural images and perform inefficiently when applied to remote sensing images. These methods often require multiple interactions to achieve satisfactory labeling results and frequently struggle to obtain precise target boundaries. To address these limitations, we propose a high-quality one-shot interactive segmentation method (OSISeg) based on the fine-tuning of foundation models, tailored for the efficient annotation of typical objects in remote sensing imagery. OSISeg utilizes robust visual priors from foundation models and implements a hybrid adapter-based strategy for fine-tuning them. Specifically, it employs a parallel structure with hybrid adapter designs to adjust the multi-head self-attention and feed-forward networks within foundation models, effectively aligning remote sensing image features for interactive segmentation tasks. Furthermore, OSISeg integrates point, box, and scribble prompts, enabling high-quality segmentation from only a single prompt through a lightweight decoder. Experimental results on multiple datasets—including buildings, water bodies, and woodlands—demonstrate that our method outperforms existing fine-tuning methods and significantly enhances the quality of one-shot interactive segmentation for typical remote sensing objects. This study highlights the potential of OSISeg to substantially accelerate sample annotation in remote sensing image labeling tasks, establishing it as a valuable tool for the field. Code is available at https://github.com/zhilyzhang/OSISeg.
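The parallel adapter strategy sketched in the abstract — lightweight trainable modules attached alongside the frozen multi-head self-attention and feed-forward sub-layers of a foundation-model transformer block — can be illustrated with a minimal PyTorch sketch. This is not the OSISeg implementation (see the linked repository for that); the class names, bottleneck dimension, and block layout here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project.
    The up-projection is zero-initialized so the adapter starts as a no-op."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    """Transformer block with the backbone frozen and a parallel adapter
    on both the self-attention and the feed-forward branch."""
    def __init__(self, dim=768, heads=12, bottleneck=64):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Only the adapters remain trainable.
        self.adapter_attn = Adapter(dim, bottleneck)
        self.adapter_mlp = Adapter(dim, bottleneck)
        for module in (self.norm1, self.attn, self.norm2, self.mlp):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, x):
        # Adapter output is added in parallel to each frozen sub-layer.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0] + self.adapter_attn(h)
        h = self.norm2(x)
        x = x + self.mlp(h) + self.adapter_mlp(h)
        return x
```

Because only the adapter parameters receive gradients, fine-tuning touches a small fraction of the backbone's weights, which is the usual motivation for adapter-based tuning of large foundation models.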
ISSN:1569-8432