Comparative analysis of large language models on rare disease identification

Bibliographic Details
Main Authors: Guangyu Ao, Min Chen, Jing Li, Huibing Nie, Lei Zhang, Zejun Chen
Format: Article
Language: English
Published: BMC 2025-04-01
Series: Orphanet Journal of Rare Diseases
Online Access: https://doi.org/10.1186/s13023-025-03656-w
Description
Summary: Diagnosing rare diseases is challenging due to their low prevalence, diverse presentations, and limited clinical recognition, often leading to diagnostic delays and errors. This study evaluates the effectiveness of multiple large language models (LLMs) in identifying rare diseases, comparing their performance with that of human physicians on real clinical cases. We analyzed 152 rare disease cases from the Chinese Medical Case Repository using four LLMs: ChatGPT-4o, Claude 3.5 Sonnet, Gemini Advanced, and Llama 3.1 405B. Overall, the LLMs outperformed human physicians; Claude 3.5 Sonnet achieved the highest diagnostic accuracy at 78.9%, significantly surpassing the physicians' accuracy of 26.3%. These findings suggest that LLMs can improve rare disease diagnosis and serve as valuable tools in clinical settings, particularly in regions with limited resources. However, further validation and careful consideration of ethical and privacy issues are necessary before their integration into medical practice.
ISSN: 1750-1172