MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation
Knowledge graph completion (KGC) is a critical task for addressing the incompleteness of knowledge graphs and supporting downstream applications. However, it faces significant challenges, including insufficient structured information and uneven entity distribution. Although existing methods have alleviated these issues to some extent, they often rely heavily on extensive training and fine-tuning, which results in low efficiency. To tackle these challenges, we introduce our MLKGC framework, a novel approach that combines large language models (LLMs) with multi-modal modules (MMs). LLMs leverage their advanced language understanding and reasoning abilities to enrich the contextual information for KGC, while MMs integrate multi-modal data, such as audio and images, to bridge knowledge gaps. This integration augments the capability of the model to address long-tail entities, enhances its reasoning processes, and facilitates more robust information integration through the incorporation of diverse inputs. By harnessing the synergy between LLMs and MMs, our approach reduces dependence on traditional text-based training and fine-tuning, providing a more efficient and accurate solution for KGC tasks. It also offers greater flexibility in addressing complex relationships and diverse entities. Extensive experiments on multiple benchmark KGC datasets demonstrate that MLKGC effectively leverages the strengths of both LLMs and multi-modal data, achieving superior performance in link-prediction tasks.
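The link-prediction setting the abstract describes can be illustrated with a minimal sketch. All function names and the scoring rule below are hypothetical stand-ins, not the authors' implementation: a real MLKGC-style system would query an LLM against the assembled context, whereas here a simple token-overlap score substitutes for the model call.

```python
# Illustrative sketch of LLM-based link prediction with multi-modal context.
# Given an incomplete triple (head, relation, ?), we merge the head entity's
# known facts with captions derived from multi-modal data (images/audio),
# then rank candidate tail entities against that context.
# The overlap-based score_candidate is a stand-in for an actual LLM query.

def build_context(head, relation, neighbors, mm_captions):
    """Assemble the textual context a language model would reason over."""
    lines = [f"Triple to complete: ({head}, {relation}, ?)"]
    lines += [f"Known fact: ({h}, {r}, {t})" for h, r, t in neighbors]
    lines += [f"Multi-modal context: {c}" for c in mm_captions]
    return "\n".join(lines)

def score_candidate(context, candidate):
    # Hypothetical scorer: fraction of the candidate's tokens found in the
    # context. A real system would replace this with an LLM likelihood.
    cleaned = context.lower().replace(",", " ").replace("(", " ").replace(")", " ")
    ctx_tokens = set(cleaned.split())
    cand_tokens = set(candidate.lower().split())
    return len(ctx_tokens & cand_tokens) / max(len(cand_tokens), 1)

def rank_tails(head, relation, neighbors, mm_captions, candidates):
    """Return candidate tail entities, best-scoring first."""
    context = build_context(head, relation, neighbors, mm_captions)
    return sorted(candidates,
                  key=lambda c: score_candidate(context, c),
                  reverse=True)

if __name__ == "__main__":
    neighbors = [("Mona Lisa", "created_by", "Leonardo da Vinci")]
    captions = ["image caption: portrait painting displayed in the Louvre"]
    ranking = rank_tails("Mona Lisa", "located_in", neighbors, captions,
                         ["Louvre", "Prado", "Uffizi"])
    print(ranking[0])  # the multi-modal caption makes "Louvre" rank first
```

The point of the sketch is the information flow the abstract claims: multi-modal captions contribute evidence about long-tail entities that the structured triples alone do not carry, and the language model (here, the stub scorer) consumes both in one context.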
| Main Authors: | Pengfei Yue, Hailiang Tang, Wanyu Li, Wenxiao Zhang, Bingjie Yan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Mathematics |
| Subjects: | large language models; multi-modal module; knowledge graph completion |
| Online Access: | https://www.mdpi.com/2227-7390/13/9/1463 |
| _version_ | 1850032146886950912 |
|---|---|
| author | Pengfei Yue; Hailiang Tang; Wanyu Li; Wenxiao Zhang; Bingjie Yan |
| author_facet | Pengfei Yue; Hailiang Tang; Wanyu Li; Wenxiao Zhang; Bingjie Yan |
| author_sort | Pengfei Yue |
| collection | DOAJ |
| description | Knowledge graph completion (KGC) is a critical task for addressing the incompleteness of knowledge graphs and supporting downstream applications. However, it faces significant challenges, including insufficient structured information and uneven entity distribution. Although existing methods have alleviated these issues to some extent, they often rely heavily on extensive training and fine-tuning, which results in low efficiency. To tackle these challenges, we introduce our MLKGC framework, a novel approach that combines large language models (LLMs) with multi-modal modules (MMs). LLMs leverage their advanced language understanding and reasoning abilities to enrich the contextual information for KGC, while MMs integrate multi-modal data, such as audio and images, to bridge knowledge gaps. This integration augments the capability of the model to address long-tail entities, enhances its reasoning processes, and facilitates more robust information integration through the incorporation of diverse inputs. By harnessing the synergy between LLMs and MMs, our approach reduces dependence on traditional text-based training and fine-tuning, providing a more efficient and accurate solution for KGC tasks. It also offers greater flexibility in addressing complex relationships and diverse entities. Extensive experiments on multiple benchmark KGC datasets demonstrate that MLKGC effectively leverages the strengths of both LLMs and multi-modal data, achieving superior performance in link-prediction tasks. |
| format | Article |
| id | doaj-art-e970d6ded3434387a62b3b3fab41a830 |
| institution | DOAJ |
| issn | 2227-7390 |
| language | English |
| publishDate | 2025-04-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Mathematics |
| spelling | doaj-art-e970d6ded3434387a62b3b3fab41a830; 2025-08-20T02:58:44Z; eng; MDPI AG; Mathematics; 2227-7390; 2025-04-01; 13(9):1463; 10.3390/math13091463; MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation; Pengfei Yue, Hailiang Tang (School of Information Science and Engineering, Qilu Normal University, Jinan 250200, China); Wanyu Li (School of Humanities, Arts, and Social Sciences, Kunsan National University, Gunsan 54150, Republic of Korea); Wenxiao Zhang (School of Computer Science and Engineering, Kunsan National University, Gunsan 54150, Republic of Korea); Bingjie Yan (School of Mathematics, High School Attached to Shandong Normal University, Jinan 250200, China); https://www.mdpi.com/2227-7390/13/9/1463; large language models; multi-modal module; knowledge graph completion |
| spellingShingle | Pengfei Yue; Hailiang Tang; Wanyu Li; Wenxiao Zhang; Bingjie Yan; MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation; Mathematics; large language models; multi-modal module; knowledge graph completion |
| title | MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation |
| title_full | MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation |
| title_fullStr | MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation |
| title_full_unstemmed | MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation |
| title_short | MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation |
| title_sort | mlkgc large language models for knowledge graph completion under multimodal augmentation |
| topic | large language models; multi-modal module; knowledge graph completion |
| url | https://www.mdpi.com/2227-7390/13/9/1463 |
| work_keys_str_mv | AT pengfeiyue mlkgclargelanguagemodelsforknowledgegraphcompletionundermultimodalaugmentation AT hailiangtang mlkgclargelanguagemodelsforknowledgegraphcompletionundermultimodalaugmentation AT wanyuli mlkgclargelanguagemodelsforknowledgegraphcompletionundermultimodalaugmentation AT wenxiaozhang mlkgclargelanguagemodelsforknowledgegraphcompletionundermultimodalaugmentation AT bingjieyan mlkgclargelanguagemodelsforknowledgegraphcompletionundermultimodalaugmentation |