Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective

Bibliographic Details
Main Authors: Ernests Lavrinovics, Russa Biswas, Johannes Bjerva, Katja Hose
Format: Article
Language: English
Published: Elsevier, 2025-05-01
Series: Web Semantics
Subjects: LLM, Factuality, Knowledge Graphs, Hallucinations
Online Access: http://www.sciencedirect.com/science/article/pii/S1570826824000301
Abstract
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP)-based applications, including automated text generation, question answering, chatbots, and others. However, they face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses. This undermines trust and limits the applicability of LLMs in different domains. Knowledge Graphs (KGs), on the other hand, provide a structured collection of interconnected facts represented as entities (nodes) and their relationships (edges). In recent research, KGs have been leveraged to provide context that can fill gaps in an LLM's understanding of certain topics, offering a promising approach to mitigating hallucinations in LLMs, enhancing their reliability and accuracy while benefiting from their wide applicability. Nonetheless, this is still a very active area of research with various unresolved open problems. In this paper, we discuss these open challenges, covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and for evaluating hallucinations. In our discussion, we consider the current use of KGs in LLM systems and identify future directions within each of these challenges.
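The abstract's core idea — a KG as entities (nodes) linked by relations (edges), retrieved and serialized as context to ground an LLM's answer — can be sketched in a few lines. This is an illustrative Python sketch only: the triples, relation names, and helper functions are hypothetical and not taken from the paper.

```python
# A toy knowledge graph as (subject, relation, object) triples.
# Subjects/objects are nodes; each relation is a labeled edge.
kg_triples = [
    ("Copenhagen", "capital_of", "Denmark"),
    ("Denmark", "part_of", "Scandinavia"),
    ("Copenhagen", "population", "660000"),
]

def facts_about(entity, triples):
    """Return every triple in which the entity appears as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

def grounding_context(entity, triples):
    """Serialize retrieved facts into plain text that can be prepended
    to a user prompt before calling an LLM (retrieval-style grounding)."""
    lines = [f"{s} {r.replace('_', ' ')} {o}."
             for s, r, o in facts_about(entity, triples)]
    return " ".join(lines)

context = grounding_context("Copenhagen", kg_triples)
# `context` now holds verbalized KG facts; an LLM prompt would typically be
# built as: context + "\nQuestion: ..." so the model answers from these facts.
```

The serialization step is deliberately naive (string concatenation); the surveyed methods range from such prompt-level injection to deeper integration during training or decoding.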
Citation: Web Semantics, vol. 85, article 100844 (2025-05-01). ISSN: 1570-8268. DOI: 10.1016/j.websem.2024.100844.
Affiliations: Ernests Lavrinovics (corresponding author), Russa Biswas, Johannes Bjerva: Department of Computer Science, Aalborg University, Copenhagen, Denmark; Katja Hose: Institute of Logic and Computation, TU Wien, Vienna, Austria.