Research on Categorical Recognition and Optimization of Hallucination Phenomenon in Large Language Models
With the widespread application of large language models in natural language understanding and generation tasks, their performance in high-precision fields such as healthcare, law, and scientific research has received increasing attention. However, the phenomenon of hallucination, a common problem...
| Main Authors: | HE Jing, SHEN Yang, XIE Runfeng |
|---|---|
| Format: | Article |
| Language: | Chinese (zho) |
| Published: | Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 2025-05-01 |
| Series: | Jisuanji kexue yu tansuo |
| Subjects: | |
| Online Access: | http://fcst.ceaj.org/fileup/1673-9418/PDF/2408080.pdf |
Similar Items
- Hallucination Mitigation for Retrieval-Augmented Large Language Models: A Review
  by: Wan Zhang, et al.
  Published: (2025-03-01)
- Reducing hallucinations of large language models via hierarchical semantic piece
  by: Yanyi Liu, et al.
  Published: (2025-04-01)
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)
- Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
  by: Sangyeon Yu, et al.
  Published: (2025-05-01)
- Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective
  by: Ernests Lavrinovics, et al.
  Published: (2025-05-01)