LLM Hallucination: The Curse That Cannot Be Broken
Artificial intelligence chatbots such as ChatGPT, Claude, and Llama, also known as large language models (LLMs), are continually evolving into an essential part of the digital tools we use, but they are plagued by the phenomenon of hallucination. This paper gives an overview of this phenomenon,...
| Main Author: | Hussein Al-Mahmood |
|---|---|
| Format: | Article |
| Language: | Arabic |
| Published: | University of Information Technology and Communications, 2025-08-01 |
| Series: | Iraqi Journal for Computers and Informatics |
| Subjects: | |
| Online Access: | https://ijci.uoitc.edu.iq/index.php/ijci/article/view/546 |
Similar Items
- You believe your LLM is not delusional? Think again! a study of LLM hallucination on foundation models under perturbation
  by: Anirban Saha, et al.
  Published: (2025-05-01)
- Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
  by: Sangyeon Yu, et al.
  Published: (2025-05-01)
- Chinese Chat Room: AI Hallucinations, Epistemology and Cognition
  by: Šekrst Kristina
  Published: (2024-12-01)
- Understanding the impact of AI Hallucinations on the university community
  by: Hend Kamel
  Published: (2024-12-01)
- Research on Categorical Recognition and Optimization of Hallucination Phenomenon in Large Language Models
  by: HE Jing, SHEN Yang, XIE Runfeng
  Published: (2025-05-01)