GLR: Graph Chain-of-Thought with LoRA Fine-Tuning and Confidence Ranking for Knowledge Graph Completion
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/15/13/7282 |
| Summary: | In knowledge graph construction, missing facts often lead to incomplete structures, limiting the performance of downstream applications. Although recent knowledge graph completion (KGC) methods based on representation learning have achieved notable progress, they still suffer from two fundamental limitations: a lack of structured reasoning capabilities and an inability to assess the confidence of their predictions, which often results in unreliable outputs. We propose the GLR framework, which integrates Graph Chain-of-Thought (Graph-CoT) reasoning, LoRA fine-tuning, and a P(True)-based confidence evaluation mechanism. This approach enhances the reasoning ability and prediction reliability of large language models (LLMs) on the KGC task. Specifically, Graph-CoT introduces local subgraph structures to guide LLMs in performing graph-constrained, step-wise reasoning, improving their ability to model multi-hop relational patterns. Complementing this, LoRA-based fine-tuning enables efficient adaptation of LLMs to the KGC scenario with minimal computational overhead, further strengthening the model's capability for graph-structured reasoning. Moreover, the P(True) mechanism quantifies the reliability of candidate entities, improving the robustness of ranking and the controllability of outputs, thereby enhancing the credibility and interpretability of model predictions in knowledge reasoning tasks. We conducted systematic experiments on the standard KGC datasets FB15K-237, WN18RR, and UMLS, which demonstrate the effectiveness and robustness of the GLR framework. Notably, GLR achieves a Mean Reciprocal Rank (MRR) of 0.507 on FB15K-237, an improvement of 6.8 points over the best recent instruction-tuned method, DIFT combined with CoLE (MRR = 0.439). GLR also maintains significant performance advantages on WN18RR and UMLS, verifying its effectiveness in enhancing both the structured reasoning capabilities and the prediction reliability of LLMs for KGC tasks. These results indicate that GLR offers a unified and scalable solution for enhancing the structure-aware reasoning and output reliability of LLMs in KGC. |
| ISSN: | 2076-3417 |
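The summary describes ranking candidate entities by a P(True) self-evaluation score, i.e. the probability the model assigns to its own answer being true. A minimal sketch of that ranking step is shown below; the mock scores stand in for an actual LLM query, and all names (`rank_by_p_true`, the example triple, the candidate entities) are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of P(True)-style confidence ranking for KGC candidates.
# In a real system, `true_token_prob` would query the fine-tuned LLM for the
# probability of the token "True" when asked whether a candidate completes
# the triple correctly; here a fixed dict stands in for that call.

def rank_by_p_true(query_triple, candidates, true_token_prob):
    """Rank candidate tail entities by descending P(True) score."""
    scored = [(entity, true_token_prob(query_triple, entity)) for entity in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy stand-in scores for an incomplete triple (head, relation, ?).
mock_scores = {"Los Angeles": 0.91, "Chicago": 0.34, "Hollywood": 0.72}

ranking = rank_by_p_true(
    ("La La Land", "filmed_in", "?"),
    list(mock_scores),
    lambda triple, entity: mock_scores[entity],
)
print([entity for entity, _ in ranking])
# → ['Los Angeles', 'Hollywood', 'Chicago']
```

The ranked list, rather than a single top answer, is what supports the robustness and controllability the summary attributes to the confidence mechanism: downstream code can threshold on the score or defer low-confidence completions.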