Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support
Abstract: Background: Large language models (LLMs) show promise in clinical contexts but can generate false facts (often referred to as "hallucinations"). One subset of these errors arises from adversarial attacks, in which fabricated details embedded in prompts lead the model to produce or elaborate...
| Main Authors: | Mahmud Omar, Vera Sorin, Jeremy D. Collins, David Reich, Robert Freeman, Nicholas Gavin, Alexander Charney, Lisa Stump, Nicola Luigi Bragazzi, Girish N. Nadkarni, Eyal Klang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Communications Medicine |
| Online Access: | https://doi.org/10.1038/s43856-025-01021-3 |
Similar Items
- Pitfalls of large language models in medical ethics reasoning
  by: Shelly Soffer, et al.
  Published: (2025-07-01)
- Large language models in medicine: A review of current clinical trials across healthcare applications.
  by: Mahmud Omar, et al.
  Published: (2024-11-01)
- Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study
  by: Mahmud Omar, et al.
  Published: (2025-05-01)
- A strategy for cost-effective large language model use at health system-scale
  by: Eyal Klang, et al.
  Published: (2024-11-01)
- P322: The role of large language models in medical genetics
  by: Rona Merdler-Rabinowicz, et al.
  Published: (2025-01-01)