MKER: multi-modal knowledge extraction and reasoning for future event prediction
Abstract: Humans can anticipate what will happen in the near future, an ability that is essential for survival but that machines still lack. To equip machines with this ability, we introduce the multi-modal knowledge extraction and reasoning (MKER) framework. This framework combines external commonsense knowledge, interna...
| Main Authors: | Chenghang Lai, Shoumeng Qiu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-01-01 |
| Series: | Complex & Intelligent Systems |
| Online Access: | https://doi.org/10.1007/s40747-024-01741-4 |
Similar Items

- Audio-visual event localization with dual temporal-aware scene understanding and image-text knowledge bridging
  by: Pufen Zhang, et al.
  Published: (2024-11-01)
- Medical Knowledge Graph: Data Sources, Construction, Reasoning, and Applications
  by: Xuehong Wu, et al.
  Published: (2023-06-01)
- Hume’s “Of scepticism with regard to reason” and the Degeneration of Knowledge in Practice
  by: Benjamin Nelson
  Published: (2024-03-01)
- HGeoKG: A Hierarchical Geographic Knowledge Graph for Geographic Knowledge Reasoning
  by: Tailong Li, et al.
  Published: (2025-01-01)
- A temporal knowledge graph reasoning model based on recurrent encoding and contrastive learning
  by: Weitong Liu, et al.
  Published: (2025-01-01)