You believe your LLM is not delusional? Think again! A study of LLM hallucination on foundation models under perturbation

Abstract: Large Language Models (LLMs) have recently become almost household terms because of their wide range of applications and immense popularity. However, hallucination in LLMs is a critical issue, as it affects the quality of an LLM's responses, reduces user trust, and leads to the spread of misinform...


Bibliographic Details
Main Authors: Anirban Saha, Binay Gupta, Anirban Chatterjee, Kunal Banerjee
Format: Article
Language: English
Published: Springer, 2025-05-01
Series: Discover Data
Subjects:
Online Access: https://doi.org/10.1007/s44248-025-00041-7