CausMatch: Causal Matching Learning With Counterfactual Preference Framework for Cross-Modal Retrieval
Cross-modal retrieval shows significant promise for multimedia analysis. Many advanced techniques widely adopt attention mechanisms to establish cross-modal correspondence in matching tasks. However, most existing methods learn cross-mod...
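The abstract refers to attention-based cross-modal correspondence for matching. Below is a minimal, hypothetical sketch of that general idea (word-to-region attention pooled into an image-text matching score). It is not the paper's CausMatch method or its counterfactual preference framework; the function name, feature dimensions, and the `temperature` parameter are illustrative assumptions.

```python
# Illustrative sketch of attention-based cross-modal matching, the general
# technique the abstract mentions. NOT the CausMatch method itself; all
# names and dimensions here are hypothetical.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention_score(image_regions, text_tokens, temperature=4.0):
    """Score image-text similarity by letting each word attend to image regions.

    image_regions: (R, d) region embeddings; text_tokens: (T, d) word embeddings.
    Returns a scalar matching score (higher = better match).
    """
    # Cosine-normalize both modalities so dot products are cosine similarities.
    img = image_regions / np.linalg.norm(image_regions, axis=1, keepdims=True)
    txt = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)

    sim = txt @ img.T                     # (T, R) word-region similarities
    attn = softmax(temperature * sim, 1)  # each word attends over regions
    attended = attn @ img                 # (T, d) attended region context per word
    # Per-word alignment scores, averaged into one image-text score.
    word_scores = np.sum(txt * attended, axis=1)
    return float(word_scores.mean())

# Toy usage: a correlated (matched) pair should outscore a random mismatch.
rng = np.random.default_rng(0)
regions = rng.normal(size=(36, 256))
caption = regions[:5] + 0.1 * rng.normal(size=(5, 256))  # correlated "caption"
mismatch = rng.normal(size=(5, 256))
print(cross_modal_attention_score(regions, caption))   # higher
print(cross_modal_attention_score(regions, mismatch))  # lower
```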
| Main Authors: | Chen Chen, Dan Wang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10843200/ |
Similar Items
- DCLMA: Deep correlation learning with multi-modal attention for visual-audio retrieval
  by: Jiwei Zhang, et al.
  Published: (2025-09-01)
- Cross-Modality Consistency Network for Remote Sensing Text-Image Retrieval
  by: Yuchen Sha, et al.
  Published: (2025-01-01)
- Enhanced-Similarity Attention Fusion for Unsupervised Cross-Modal Hashing Retrieval
  by: Mingyong Li, et al.
  Published: (2025-01-01)
- A novel deep high-level concept-mining jointing hashing model for unsupervised cross-modal retrieval
  by: Chun-Ru Dong, et al.
  Published: (2025-06-01)
- Exploring latent weight factors and global information for food-oriented cross-modal retrieval
  by: Wenyu Zhao, et al.
  Published: (2023-12-01)