Can AI match emergency physicians in managing common emergency cases? A comparative performance evaluation
Abstract: Background: Large language models (LLMs) such as ChatGPT are increasingly explored for clinical decision support. However, their performance in high-stakes emergency scenarios remains underexamined. This study aimed to evaluate ChatGPT’s diagnostic and therapeutic accuracy compared to a boar...
| Main Author: | Mehmet Gün |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | BMC Emergency Medicine |
| Online Access: | https://doi.org/10.1186/s12873-025-01303-y |
Similar Items
- Perceptions of large language models in medical education and clinical practice among pediatric emergency physicians in Saudi Arabia: a multiregional cross-sectional study
  by: Yara AlGoraini, et al.
  Published: (2025-07-01)
- A Clinical Evaluation of Cardiovascular Emergencies: A Comparison of Responses from ChatGPT, Emergency Physicians, and Cardiologists
  by: Muhammet Geneş, et al.
  Published: (2024-12-01)
- Evaluating the predictive accuracy of ChatGPT in risk stratification for chest pain in the emergency department
  by: Fabio Malalan, et al.
  Published: (2025-06-01)
- Preliminary evaluation of ChatGPT model iterations in emergency department diagnostics
  by: Jinge Wang, et al.
  Published: (2025-03-01)
- Navigating the integration of ChatGPT in UAE’s government sector: challenges and opportunities
  by: Ghada Nabil Goher
  Published: (2025-01-01)