DRKG: Faithful and Interpretable Multi-Hop Knowledge Graph Question Answering via LLM-Guided Reasoning Plans

Multi-Hop Knowledge Graph Question Answering (multi-hop KGQA) aims to obtain answers by analyzing the semantics of natural language questions and performing multi-step reasoning across multiple entities and relations in knowledge graphs. Traditional embedding-based methods map natural language questions and knowledge graphs into vector spaces for answer matching through vector operations. While these approaches have improved model performance, they face two critical challenges: the lack of clear interpretability caused by implicit reasoning mechanisms, and the semantic gap between natural language queries and structured knowledge representations. This study proposes the DRKG (Decomposed Reasoning over Knowledge Graph), a constrained multi-hop reasoning framework based on large language models (LLMs) that introduces explicit reasoning plans as logical boundary controllers. The innovation of the DRKG lies in two key aspects: First, the DRKG generates hop-constrained reasoning plans through semantic parsing based on LLMs, explicitly defining the traversal path length and entity-retrieval logic in knowledge graphs. Second, the DRKG conducts selective retrieval during knowledge graph traversal based on these reasoning plans, ensuring faithfulness to structured knowledge. We evaluate the DRKG on four datasets, and the experimental results demonstrate that the DRKG achieves 1%–5% accuracy improvements over the best baseline models. Additional ablation studies verify the effectiveness of explicit reasoning plans in enhancing interpretability while constraining path divergence. A reliability analysis further examines the impact of different parameter combinations on the DRKG’s performance.
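The abstract describes a two-stage mechanism: an LLM first emits a hop-constrained reasoning plan, and that plan then constrains traversal of the knowledge graph. A minimal Python sketch of this control flow, with a toy graph and a stubbed `plan_from_llm` standing in for the LLM call (the graph, plan format, and function names are illustrative assumptions, not the paper's actual interfaces):

```python
# Toy knowledge graph: entity -> relation -> set of neighbor entities.
# Illustrative data only; not from the paper.
KG = {
    "Inception": {"directed_by": {"Christopher Nolan"}},
    "Christopher Nolan": {"born_in": {"London"}},
}

def plan_from_llm(question):
    """Stand-in for LLM semantic parsing: returns a fixed 2-hop plan
    (topic entity plus an ordered list of relations to follow)."""
    return {"topic": "Inception", "relations": ["directed_by", "born_in"]}

def execute_plan(kg, plan):
    """Selective retrieval: follow only the relations named in the plan,
    for exactly len(plan['relations']) hops, so the path cannot diverge."""
    frontier = {plan["topic"]}
    for rel in plan["relations"]:  # the plan bounds both length and relations
        next_frontier = set()
        for entity in frontier:
            next_frontier |= kg.get(entity, {}).get(rel, set())
        frontier = next_frontier
    return frontier

plan = plan_from_llm("Where was the director of Inception born?")
print(execute_plan(KG, plan))  # -> {'London'}
```

The plan acts as the "logical boundary controller" from the abstract: traversal stops after the planned number of hops and never explores relations outside the plan.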

Bibliographic Details
Main Authors: Yan Chen, Shuai Sun, Xiaochun Hu
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Applied Sciences
Subjects: knowledge graph; large language models; Multi-Hop Knowledge Graph Question Answering; natural language processing
Online Access: https://www.mdpi.com/2076-3417/15/12/6722
collection DOAJ
description Multi-Hop Knowledge Graph Question Answering (multi-hop KGQA) aims to obtain answers by analyzing the semantics of natural language questions and performing multi-step reasoning across multiple entities and relations in knowledge graphs. Traditional embedding-based methods map natural language questions and knowledge graphs into vector spaces for answer matching through vector operations. While these approaches have improved model performance, they face two critical challenges: the lack of clear interpretability caused by implicit reasoning mechanisms, and the semantic gap between natural language queries and structured knowledge representations. This study proposes the DRKG (Decomposed Reasoning over Knowledge Graph), a constrained multi-hop reasoning framework based on large language models (LLMs) that introduces explicit reasoning plans as logical boundary controllers. The innovation of the DRKG lies in two key aspects: First, the DRKG generates hop-constrained reasoning plans through semantic parsing based on LLMs, explicitly defining the traversal path length and entity-retrieval logic in knowledge graphs. Second, the DRKG conducts selective retrieval during knowledge graph traversal based on these reasoning plans, ensuring faithfulness to structured knowledge. We evaluate the DRKG on four datasets, and the experimental results demonstrate that the DRKG achieves 1%–5% accuracy improvements over the best baseline models. Additional ablation studies verify the effectiveness of explicit reasoning plans in enhancing interpretability while constraining path divergence. A reliability analysis further examines the impact of different parameter combinations on the DRKG’s performance.
format Article
id doaj-art-e39ce72f19d34632a8419d6677ebe07d
institution Kabale University
issn 2076-3417
language English
publishDate 2025-06-01
publisher MDPI AG
record_format Article
series Applied Sciences
doi 10.3390/app15126722
affiliation Yan Chen: School of Computer and Electronic Information, Guangxi University, Nanning 530004, China
affiliation Shuai Sun: School of Computer and Electronic Information, Guangxi University, Nanning 530004, China
affiliation Xiaochun Hu: Guangxi Key Laboratory of Finance and Economics Big Data, Nanning 530007, China
topic knowledge graph
large language models
Multi-Hop Knowledge Graph Question Answering
natural language processing
url https://www.mdpi.com/2076-3417/15/12/6722