LLM Hallucination: The Curse That Cannot Be Broken

Description

Artificial intelligence chatbots such as ChatGPT, Claude, and Llama, also known as large language models (LLMs), are steadily becoming an essential part of the digital tools we use, yet they remain plagued by the phenomenon of hallucination. This paper gives an overview of the phenomenon, discussing its different types, the multi-faceted causes behind it, its impact, and the argument that the inherent nature of current LLMs makes hallucinations inevitable. After examining several detection and mitigation techniques, each chosen for its distinct implementation approach, including enhanced training, tagged-context prompts, contrastive learning, and semantic entropy analysis, the work concludes that none of them can reliably eliminate hallucinations once they occur. The phenomenon is here to stay, which calls for robust user awareness and verification mechanisms and for stopping short of absolute reliance on these models in healthcare, journalism, legal services, finance, and other critical applications that require accurate and reliable information to support informed decisions.
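Of the detection techniques named in the abstract, semantic entropy analysis is the most directly algorithmic, so a brief illustration may help. The sketch below is not taken from the article; it is a minimal Python rendering of the general idea as reported in the wider literature: sample several answers to the same prompt, group them into meaning-equivalence clusters, and treat high entropy over those clusters as a signal that the model is likely confabulating. The function names, the `equivalent` check, and the toy data are illustrative placeholders; a real pipeline would cluster answers with a bidirectional entailment (NLI) model.

```python
import math
from typing import Callable, List

def semantic_entropy(answers: List[str],
                     equivalent: Callable[[str, str], bool]) -> float:
    """Group sampled answers into meaning-equivalence clusters and return
    the entropy (in nats) of the cluster distribution. High entropy means
    the answers disagree in meaning, which is used as a hallucination signal."""
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:  # no existing cluster matched, start a new one
            clusters.append([ans])
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)

if __name__ == "__main__":
    # Toy equivalence check; a real system would use a bidirectional
    # entailment model to decide whether two answers mean the same thing.
    same_meaning = lambda a, b: a.strip().lower() == b.strip().lower()

    consistent = ["Paris", "paris", "Paris", "Paris", "paris"]
    scattered = ["Paris", "Lyon", "Marseille", "Nice", "Toulouse"]

    print(semantic_entropy(consistent, same_meaning))  # ~0.0 -> answers agree
    print(semantic_entropy(scattered, same_meaning))   # ~1.61 -> likely confabulating
```
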
Bibliographic Details
Main Author: Hussein Al-Mahmood
Format: Article
Language: Arabic
Published: University of Information Technology and Communications, 2025-08-01
Series: Iraqi Journal for Computers and Informatics
Subjects: AI; artificial intelligence; hallucination; large language model; LLM
Online Access: https://ijci.uoitc.edu.iq/index.php/ijci/article/view/546
ISSN: 2313-190X, 2520-4912
Volume 51, Issue 2, pp. 56-69
DOI: 10.25195/ijci.v51i2.546
Author ORCID: https://orcid.org/0009-0009-5318-1048 (University of Basrah)