Hallucination Mitigation for Retrieval-Augmented Large Language Models: A Review
Retrieval-augmented generation (RAG) leverages the strengths of information retrieval and generative models to enhance the handling of real-time and domain-specific knowledge. Despite its advantages, limitations within RAG components may cause hallucinations, or, more precisely, confabulations...
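The RAG pattern the abstract describes can be summarized as: retrieve the documents most relevant to a query, then condition the generator on them. Below is a minimal sketch, assuming a toy keyword-overlap retriever and a placeholder `generate` function; the corpus, scoring, and function names are hypothetical illustrations, not the reviewed paper's method.

```python
# Minimal RAG sketch: retrieve top-k relevant documents, then condition
# the generator on them. The corpus, scorer, and generate() stub are
# hypothetical stand-ins for illustration only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Score documents by keyword overlap with the query; return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a generative model."""
    return f"[model output conditioned on: {prompt[:60]}...]"

def rag_answer(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # Grounding the prompt in retrieved text is what RAG relies on to reduce
    # hallucination; weak retrieval here propagates errors downstream, which
    # is the failure mode the review calls confabulation.
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

corpus = [
    "RAG combines a retriever with a generative model.",
    "Hallucinations are outputs unsupported by source evidence.",
    "Domain-specific corpora supply real-time knowledge.",
]
print(rag_answer("How does RAG reduce hallucinations?", corpus))
```

As the inline comments note, each RAG component (retriever, prompt construction, generator) is a point where errors can enter, which is why the review surveys mitigation at the component level.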
Saved in:

| Main Authors: | Wan Zhang, Jing Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Mathematics |
| Online Access: | https://www.mdpi.com/2227-7390/13/5/856 |
Similar Items
- Reducing hallucinations of large language models via hierarchical semantic piece
  by: Yanyi Liu, et al.
  Published: (2025-04-01)
- Research on Categorical Recognition and Optimization of Hallucination Phenomenon in Large Language Models
  by: HE Jing, SHEN Yang, XIE Runfeng
  Published: (2025-05-01)
- A scalable framework for evaluating multiple language models through cross-domain generation and hallucination detection
  by: Sorup Chakraborty, et al.
  Published: (2025-08-01)
- Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
  by: Sangyeon Yu, et al.
  Published: (2025-05-01)
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)