Large Language Models lack essential metacognition for reliable medical reasoning
Abstract: Large Language Models have demonstrated expert-level accuracy on medical board examinations, suggesting potential for clinical decision support systems. However, their metacognitive abilities, crucial for medical decision-making, remain largely unexplored. To address this gap, we developed...
| Main Authors: | Maxime Griot, Coralie Hemptinne, Jean Vanderdonckt, Demet Yuksel |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-01-01 |
| Series: | Nature Communications |
| Online Access: | https://doi.org/10.1038/s41467-024-55628-6 |
Similar Items

- Pitfalls of large language models in medical ethics reasoning
  by: Shelly Soffer, et al.
  Published: (2025-07-01)
- Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task
  by: Ondřej Straka, et al.
  Published: (2021-09-01)
- The Advanced Reasoning Capabilities of Large Language Models for Detecting Contraindicated Options in Medical Exams
  by: Yuichiro Yano, et al.
  Published: (2025-05-01)
- Metacognitive States in Language, Communication and Cognition
  by: N. K. Ryabtseva
  Published: (2017-05-01)
- Recognition and Enforcement of Foreign Judgments Lacking Reasons under Turkish Law
  by: Hatice Selin Pürselim Arning, et al.
  Published: (2022-06-01)