Leveraging modality‐specific and shared features for RGB‐T salient object detection

Bibliographic Details
Main Authors: Shuo Wang, Gang Yang, Qiqi Xu, Xun Dai
Format: Article
Language: English
Published: Wiley 2024-12-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/cvi2.12307
Description
Summary: Most existing RGB‐T salient object detection methods are based on a dual‐stream encoding, single‐stream decoding network architecture. These models rely heavily on the quality of the fused features, which often emphasise modality‐shared features while overlooking modality‐specific ones, and thus fail to fully utilise the rich information contained in multi‐modality data. To this end, a modality‐separate tri‐stream net (MSTNet), consisting of a tri‐stream encoding (TSE) structure and a tri‐stream decoding (TSD) structure, is proposed. The TSE explicitly separates and extracts the modality‐shared and modality‐specific features to improve the utilisation of multi‐modality data. In addition, based on hybrid‐attention and cross‐attention mechanisms, the authors design an enhanced complementary fusion (ECF) module, which fully considers the complementarity between the features to be fused and achieves high‐quality feature fusion. Furthermore, in the TSD, the quality of the uni‐modality features is ensured under the constraint of supervision. Finally, to make full use of the rich multi‐level and multi‐scale decoding features contained in the TSD, the authors design an adaptive multi‐scale decoding module and a multi‐stream feature aggregation module to improve the decoding capability. Extensive experiments on three public datasets show that MSTNet outperforms 14 state‐of‐the‐art methods, demonstrating that it extracts and utilises multi‐modality information more adequately and produces more complete and richer features, thereby improving performance. The code will be released at https://github.com/JOOOOKII/MSTNet.
ISSN: 1751-9632, 1751-9640
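
As an illustration of the cross‐attention fusion idea described in the abstract, below is a minimal sketch of how complementary RGB and thermal features could be exchanged and fused. It assumes a PyTorch setting; the module name CrossAttentionFusion, the channel and head counts, and the 1×1 projection are illustrative assumptions and do not reproduce the authors' ECF implementation.

```python
# Hypothetical sketch of cross-attention fusion between RGB and thermal feature
# maps, in the spirit of the ECF module described in the abstract. Names and
# hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse RGB and thermal features by letting each modality attend to the other."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.rgb_to_t = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.t_to_rgb = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        # Flatten spatial dimensions into token sequences of shape (B, H*W, C).
        rgb_seq = rgb.flatten(2).transpose(1, 2)
        t_seq = thermal.flatten(2).transpose(1, 2)
        # Each modality queries the other, so complementary cues are exchanged.
        rgb_enh, _ = self.rgb_to_t(query=rgb_seq, key=t_seq, value=t_seq)
        t_enh, _ = self.t_to_rgb(query=t_seq, key=rgb_seq, value=rgb_seq)
        # Restore the spatial layout and merge the two enhanced streams.
        rgb_enh = rgb_enh.transpose(1, 2).reshape(b, c, h, w)
        t_enh = t_enh.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([rgb_enh, t_enh], dim=1))


if __name__ == "__main__":
    fuse = CrossAttentionFusion(channels=64)
    rgb_feat = torch.randn(2, 64, 20, 20)
    t_feat = torch.randn(2, 64, 20, 20)
    print(fuse(rgb_feat, t_feat).shape)  # torch.Size([2, 64, 20, 20])
```

In this sketch each modality queries the other before fusion, so cues that are weak in one stream (for example, RGB under poor illumination) can be reinforced by the other, which is the general motivation behind complementary RGB‐T fusion.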