An adaptive feature fusion strategy using dual-layer attention and multi-modal deep reinforcement learning for all-media similarity search
Abstract: This paper proposes a novel adaptive feature fusion strategy that combines a dual-layer attention mechanism with multi-modal deep reinforcement learning (DRL) to optimize cross-modal information retrieval. The dual-layer attention mechanism enhances the model's ability to capture deep s...
| Main Authors: | Jin Yue, Jiayun Lang, Rui Feng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-05-01 |
| Series: | Discover Artificial Intelligence |
| Online Access: | https://doi.org/10.1007/s44163-025-00332-7 |
Similar Items
- Enhanced-Similarity Attention Fusion for Unsupervised Cross-Modal Hashing Retrieval
  by: Mingyong Li, et al.
  Published: (2025-01-01)
- DCLMA: Deep correlation learning with multi-modal attention for visual-audio retrieval
  by: Jiwei Zhang, et al.
  Published: (2025-09-01)
- Dual-Layer Fusion Knowledge Reasoning with Enhanced Multi-modal Features
  by: JING Boxiang, WANG Hairong, WANG Tong, YANG Zhenye
  Published: (2025-02-01)
- Video anomaly detection via cross-modal fusion and hyperbolic graph attention mechanism
  by: JIANG Di, et al.
  Published: (2025-06-01)
- Hierarchical in-out fusion for incomplete multimodal brain tumor segmentation
  by: Fang Liu, et al.
  Published: (2025-07-01)