You believe your LLM is not delusional? Think again! A study of LLM hallucination on foundation models under perturbation
Abstract: Large Language Models (LLMs) have recently become almost a household term because of their wide range of applications and immense popularity. However, hallucination in LLMs is a critical issue, as it affects the quality of an LLM's response, reduces user trust, and leads to the spread of misinformation...
| Main Authors: | Anirban Saha, Binay Gupta, Anirban Chatterjee, Kunal Banerjee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-05-01 |
| Series: | Discover Data |
| Online Access: | https://doi.org/10.1007/s44248-025-00041-7 |
Similar Items
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)
- LLM technologies and information search
  by: Lin Liu, et al.
  Published: (2024-11-01)
- Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective
  by: Ernests Lavrinovics, et al.
  Published: (2025-05-01)
- A model of ensuring LLM cybersecurity
  by: Oleksii Neretin, et al.
  Published: (2025-05-01)
- Moving LLM evaluation forward: lessons from human judgment research
  by: Andrea Polonioli
  Published: (2025-05-01)