Dual-Layer Fusion Knowledge Reasoning with Enhanced Multi-modal Features
Most existing multi-modal knowledge reasoning methods directly fuse the multi-modal features extracted from pre-trained models by concatenation or attention, often ignoring the heterogeneity and interaction complexity between different modalities. Therefore, a dual-layer fusion knowledge reasoning...
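For context, here is a minimal sketch (assuming PyTorch; module names and feature sizes are illustrative assumptions, not taken from the paper) of the two baseline fusion styles the abstract refers to: direct concatenation of pre-extracted modality features versus attention-weighted fusion. The paper's own dual-layer fusion model is not reproduced here.

```python
# Illustrative sketch of two common multi-modal fusion baselines (not the paper's method).
import torch
import torch.nn as nn


class ConcatFusion(nn.Module):
    """Direct concatenation ("splicing") of text and image features, then projection."""
    def __init__(self, text_dim: int, image_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(text_dim + image_dim, out_dim)

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(torch.cat([text_feat, image_feat], dim=-1))


class AttentionFusion(nn.Module):
    """Attention-weighted fusion: project each modality to a shared space,
    score it, and combine with softmax-normalized weights."""
    def __init__(self, text_dim: int, image_dim: int, out_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.image_proj = nn.Linear(image_dim, out_dim)
        self.score = nn.Linear(out_dim, 1)

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # (batch, 2, out_dim): stacked modality representations
        stacked = torch.stack(
            [self.text_proj(text_feat), self.image_proj(image_feat)], dim=1
        )
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, 2, 1)
        return (weights * stacked).sum(dim=1)                # (batch, out_dim)


if __name__ == "__main__":
    text = torch.randn(4, 768)    # e.g. BERT-style text features (assumed size)
    image = torch.randn(4, 2048)  # e.g. ResNet-style image features (assumed size)
    print(ConcatFusion(768, 2048, 512)(text, image).shape)     # torch.Size([4, 512])
    print(AttentionFusion(768, 2048, 512)(text, image).shape)  # torch.Size([4, 512])
```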
| Main Author: | JING Boxiang, WANG Hairong, WANG Tong, YANG Zhenye |
|---|---|
| Format: | Article |
| Language: | Chinese (zho) |
| Published: | Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 2025-02-01 |
| Series: | Jisuanji kexue yu tansuo |
| Subjects: | |
| Online Access: | http://fcst.ceaj.org/fileup/1673-9418/PDF/2312065.pdf |
Similar Items
- MERGE: A Modal Equilibrium Relational Graph Framework for Multi-Modal Knowledge Graph Completion
  by: Yuying Shang, et al.
  Published: (2024-11-01)
- Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
  by: Grant Wardle, et al.
  Published: (2025-06-01)
- Video anomaly detection via cross-modal fusion and hyperbolic graph attention mechanism
  by: JIANG Di, et al.
  Published: (2025-06-01)
- An adaptive feature fusion strategy using dual-layer attention and multi-modal deep reinforcement learning for all-media similarity search
  by: Jin Yue, et al.
  Published: (2025-05-01)
- Multi-modal feature fusion with multi-head self-attention for epileptic EEG signals
  by: Ning Huang, et al.
  Published: (2024-08-01)