Pitfalls of large language models in medical ethics reasoning
Large language models (LLMs), such as ChatGPT-o1, display subtle blind spots in complex reasoning tasks. We illustrate these pitfalls with lateral thinking puzzles and medical ethics scenarios. Our observations indicate that patterns in training data may contribute to cognitive biases, limiting the...
| Main Authors: | Shelly Soffer, Vera Sorin, Girish N. Nadkarni, Eyal Klang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01792-y |
Similar Items
- P322: The role of large language models in medical genetics
  by: Rona Merdler-Rabinowicz, et al.
  Published: (2025-01-01)
- Large language models in medicine: A review of current clinical trials across healthcare applications.
  by: Mahmud Omar, et al.
  Published: (2024-11-01)
- Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study
  by: Mahmud Omar, et al.
  Published: (2025-05-01)
- Predictive machine-learning model for screening iron deficiency without anaemia: a retrospective cohort study
  by: Girish N Nadkarni, et al.
  Published: (2025-08-01)
- Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support
  by: Mahmud Omar, et al.
  Published: (2025-08-01)