“My AI is Lying to Me”: User-reported LLM hallucinations in AI mobile apps reviews
Abstract: Large Language Models (LLMs) are increasingly integrated into AI-powered mobile applications, offering novel functionalities but also introducing the risk of “hallucinations”: generating plausible yet incorrect or nonsensical information. These AI errors can significantly degrade user experi...
| Main Authors: | Rhodes Massenon, Ishaya Gambo, Javed Ali Khan, Christopher Agbonkhese, Ayed Alwadain |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-15416-8 |
Similar Items
- Mobile app review analysis for crowdsourcing of software requirements: a mapping study of automated and semi-automated tools
  by: Rhodes Massenon, et al.
  Published: (2024-11-01)
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)
- Use me wisely: AI-driven assessment for LLM prompting skills development
  by: Dimitri Ognibene, Gregor Donabauer, Emily Theophilou, Cansu Koyuturk, Mona Yavari, Sathya Bursic, Alessia Telari, Alessia Testa, Raffaele Boiano, Davide Taibi, Davinia Hernandez-Leo, Udo Kruschwitz, and Martin Ruskov
  Published: (2025-07-01)
- EFFECTS OF AI HALLUCINATIONS ON MILITARY SYSTEMS
  by: Teodor FRUNZETI, et al.
  Published: (2025-07-01)
- Understanding the impact of AI Hallucinations on the university community
  by: Hend Kamel
  Published: (2024-12-01)