Hallucination Mitigation for Retrieval-Augmented Large Language Models: A Review

Bibliographic Details
Main Authors: Wan Zhang, Jing Zhang
Affiliation: School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Mathematics, Vol. 13, No. 5, Article 856
ISSN: 2227-7390
DOI: 10.3390/math13050856
Subjects: large language models; hallucination; retrieval-augmented generation; hallucination mitigation
Online Access: https://www.mdpi.com/2227-7390/13/5/856
Description: Retrieval-augmented generation (RAG) leverages the strengths of information retrieval and generative models to enhance the handling of real-time and domain-specific knowledge. Despite its advantages, limitations within RAG components can cause hallucinations (more precisely termed confabulations) in generated outputs, driving extensive research to address these limitations and mitigate hallucinations. This review focuses on hallucination in retrieval-augmented large language models (LLMs). We first examine the causes of hallucinations arising from the different sub-tasks of the retrieval and generation phases. We then provide a comprehensive overview of the corresponding hallucination mitigation techniques, offering a targeted and complete framework for addressing hallucinations in retrieval-augmented LLMs. We also investigate methods for reducing the impact of hallucinations through detection and correction. Finally, we discuss promising future research directions for mitigating hallucinations in retrieval-augmented LLMs.
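
To make the pipeline under discussion concrete, below is a minimal, hypothetical Python sketch of a retrieve-then-generate loop with a naive grounding check, mirroring the retrieval phase, the generation phase, and the detection step that the review surveys. The function names (retrieve, generate, is_grounded), the toy corpus, and the word-overlap heuristic are illustrative assumptions, not methods taken from the paper.

# Minimal retrieve-then-generate sketch (illustrative only; not from the reviewed paper).
# All names and heuristics here are hypothetical placeholders.

from typing import List

CORPUS = [
    "RAG pipelines retrieve documents and condition generation on them.",
    "Hallucinations are outputs not supported by the retrieved evidence.",
    "Detection and correction can reduce the impact of hallucinations.",
]

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(query: str, context: List[str]) -> str:
    """Placeholder generator: a real system would prompt an LLM with the retrieved context."""
    return f"Answer to '{query}' based on: {context[0]}"

def is_grounded(answer: str, context: List[str]) -> bool:
    """Naive hallucination check: flag answers sharing few content words with the evidence."""
    a_terms = set(answer.lower().split())
    c_terms = set(" ".join(context).lower().split())
    return len(a_terms & c_terms) >= 3

if __name__ == "__main__":
    query = "How can hallucinations in RAG be reduced?"
    context = retrieve(query, CORPUS)
    answer = generate(query, context)
    print(answer if is_grounded(answer, context) else "Answer withheld: insufficient grounding.")

In a real system the retriever would be a sparse or dense index, the generator an LLM conditioned on the retrieved passages, and the grounding check an entailment- or citation-based detector; the sketch only fixes the control flow connecting those stages.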