Leveraging modality‐specific and shared features for RGB‐T salient object detection
Abstract Most existing RGB-T salient object detection methods adopt a dual-stream encoding, single-stream decoding network architecture. These models rely heavily on the quality of the fused features, which often emphasise modality-shared features while overlooking modality-specific feature...
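For orientation, the sketch below illustrates the generic dual-stream encoding, single-stream decoding pattern the abstract refers to: two modality-specific encoder branches (RGB and thermal) whose features are fused into a shared representation before a single decoder predicts the saliency map. This is an illustrative assumption only; the layer sizes, concatenation-based fusion, and names such as `DualStreamSOD` are hypothetical and do not reproduce the authors' model.

```python
# Minimal sketch (assumed layout, not the paper's method): dual-stream
# encoding with modality-specific branches, a shared fusion step, and a
# single-stream decoder producing a one-channel saliency map.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """One modality branch (RGB or thermal); produces modality-specific features."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class DualStreamSOD(nn.Module):
    """Dual-stream encoding, single-stream decoding (generic pattern)."""
    def __init__(self):
        super().__init__()
        self.rgb_enc = TinyEncoder(in_ch=3)   # RGB stream
        self.t_enc = TinyEncoder(in_ch=1)     # thermal stream
        self.fuse = nn.Conv2d(128, 64, 1)     # modality-shared fusion of both streams
        self.decoder = nn.Sequential(         # single decoder on the fused features
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),   # 1-channel saliency prediction
        )

    def forward(self, rgb, thermal):
        f_rgb = self.rgb_enc(rgb)                            # modality-specific (RGB)
        f_t = self.t_enc(thermal)                            # modality-specific (thermal)
        shared = self.fuse(torch.cat([f_rgb, f_t], dim=1))   # modality-shared features
        return torch.sigmoid(self.decoder(shared))

if __name__ == "__main__":
    model = DualStreamSOD()
    rgb = torch.randn(1, 3, 256, 256)
    thermal = torch.randn(1, 1, 256, 256)
    print(model(rgb, thermal).shape)  # torch.Size([1, 1, 256, 256])
```

The point the abstract makes is that a pipeline like this depends entirely on the fused (`shared`) features, so modality-specific information from each branch can be lost before decoding.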
| Main Authors: | Shuo Wang, Gang Yang, Qiqi Xu, Xun Dai |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-12-01 |
| Series: | IET Computer Vision |
| Online Access: | https://doi.org/10.1049/cvi2.12307 |
Similar Items
- Coordinate Attention Filtering Depth-Feature Guide Cross-Modal Fusion RGB-Depth Salient Object Detection
  by: Lingbing Meng, et al.
  Published: (2023-01-01)
- TCAINet: an RGB-T salient object detection model with cross-modal fusion and adaptive decoding
  by: Hong Peng, et al.
  Published: (2025-04-01)
- Edge-guided feature fusion network for RGB-T salient object detection
  by: Yuanlin Chen, et al.
  Published: (2024-12-01)
- Cross-modal interactive and global awareness fusion network for RGB-D salient object detection
  by: Runqing Li, et al.
  Published: (2025-01-01)