Distillation and Supplementation of Features for Referring Image Segmentation
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10745233/ |
| Summary: | Referring Image Segmentation (RIS) aims to match a specific object instance in an input image to a natural-language expression and generate the corresponding pixel-level segmentation mask. Existing methods typically obtain multi-modal features by fusing linguistic features with visual features and feed them into a mask decoder to generate segmentation masks. However, these methods overlook the interfering noise in the multi-modal features, which adversely affects the generation of the target segmentation masks. In addition, the vast majority of current RIS models adopt only the residual structure inherited from Transformer blocks; the limitations of this information-propagation scheme hinder the stacking of deeper model structures and consequently degrade training efficacy. In this paper, we propose a RIS method called DSFRIS, which draws on sparse reconstruction and employs a novel training mechanism for the decoder. Specifically, we propose a feature distillation mechanism for the multi-modal feature-fusion stage and a feature supplementation mechanism for the mask-decoder training process: two novel mechanisms that, respectively, reduce the noise in the fused multi-modal features and enrich the feature information available during decoder training. Extensive experiments on three widely used RIS benchmark datasets demonstrate the state-of-the-art performance of our proposed method. |
| ISSN: | 2169-3536 |
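
The summary above describes the pipeline only at a high level: fuse linguistic and visual features, denoise ("distill") the fused features, and decode a mask with extra ("supplemented") feature information. The sketch below is a minimal, hypothetical PyTorch illustration of that description; the cross-attention fusion, the learned channel gate standing in for feature distillation, and the skip connection standing in for feature supplementation are assumptions for illustration, not the paper's actual DSFRIS modules.

```python
# Illustrative sketch only; all module names and shapes are hypothetical,
# not the DSFRIS implementation from the paper.
import torch
import torch.nn as nn

class FusionWithDistillation(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Cross-attention: visual tokens attend to language tokens.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # "Distillation" sketched as a learned channel gate that
        # suppresses noisy components of the fused features (assumption).
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, vis, lang):
        # vis: (B, N_pixels, C), lang: (B, N_words, C)
        fused, _ = self.cross_attn(query=vis, key=lang, value=lang)
        fused = vis + fused                 # standard residual fusion
        return fused * self.gate(fused)     # gated "distilled" features

class MaskDecoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, 1)       # per-pixel mask logit

    def forward(self, feats, skip=None):
        # "Feature supplementation" sketched as re-injecting earlier
        # features into the decoder, beyond a plain residual (assumption).
        if skip is not None:
            feats = feats + skip
        return self.proj(feats)             # (B, N_pixels, 1) logits

B, Np, Nw, C = 2, 64 * 64, 20, 256
vis, lang = torch.randn(B, Np, C), torch.randn(B, Nw, C)
fusion, decoder = FusionWithDistillation(C), MaskDecoder(C)
logits = decoder(fusion(vis, lang), skip=vis)
print(logits.shape)                         # torch.Size([2, 4096, 1])
```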