Diagnostic Performance of Publicly Available Large Language Models in Corneal Diseases: A Comparison with Human Specialists

Bibliographic Details
Main Authors: Cheng Jiao, Erik Rosas, Hassan Asadigandomani, Mohammad Delsoz, Yeganeh Madadi, Hina Raja, Wuqaas M. Munir, Brendan Tamm, Shiva Mehravaran, Ali R. Djalilian, Siamak Yousefi, Mohammad Soleimani
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Diagnostics
Subjects:
Online Access: https://www.mdpi.com/2075-4418/15/10/1221
Description
Summary: Background/Objectives: This study evaluated the diagnostic accuracy of seven publicly available large language models (LLMs) in diagnosing corneal diseases (GPT-3.5, GPT-4o mini, GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, Grok 3, and DeepSeek R1), comparing their performance with that of human specialists. Methods: Twenty corneal disease cases from the University of Iowa's EyeRounds were presented to each LLM. Diagnostic accuracy was determined by comparing LLM-generated diagnoses to the confirmed case diagnoses. Four human cornea specialists evaluated the same cases to establish a benchmark and to assess interobserver agreement. Results: Diagnostic accuracy varied significantly among LLMs (p = 0.001). GPT-4o achieved the highest accuracy (80.0%), followed by Claude 3.5 Sonnet and Grok 3 (70.0% each), DeepSeek R1 (65.0%), GPT-3.5 (60.0%), GPT-4o mini (55.0%), and Gemini 1.5 Flash (30.0%). Human experts averaged 92.5% accuracy, outperforming all LLMs (p < 0.001, Cohen's d = −1.314). GPT-4o's accuracy did not differ significantly from the human consensus (p = 0.250, κ = 0.348), while Claude 3.5 Sonnet and Grok 3 showed fair agreement (κ = 0.219). DeepSeek R1 also performed reasonably (κ = 0.178), though its agreement with the human consensus was not statistically significant. Conclusions: Among the evaluated LLMs, GPT-4o, Claude 3.5 Sonnet, Grok 3, and DeepSeek R1 demonstrated promising diagnostic accuracy, with GPT-4o most closely matching human performance. However, performance remained inconsistent, especially in complex cases. LLMs may offer value as diagnostic support tools, but human expertise remains indispensable for clinical decision-making.
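
For readers unfamiliar with the agreement statistics cited above, the following minimal Python sketch shows how per-case diagnostic accuracy and Cohen's κ against a human-consensus benchmark can be computed with scikit-learn. It is not the authors' analysis code: the 0/1 correctness vectors below are hypothetical, with the LLM vector merely arranged to reproduce an 80.0% accuracy like GPT-4o's reported result.

    # Hypothetical illustration (not the study's code): per-case correctness
    # for 20 cases, encoded as 1 = correct diagnosis, 0 = incorrect.
    from sklearn.metrics import cohen_kappa_score

    llm_correct   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # 16/20 correct
    human_correct = [1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # 18/20 correct

    accuracy = sum(llm_correct) / len(llm_correct)         # fraction of cases diagnosed correctly
    kappa = cohen_kappa_score(human_correct, llm_correct)  # chance-corrected agreement

    print(f"LLM accuracy: {accuracy:.1%}")                 # 80.0%
    print(f"Cohen's kappa vs. human consensus: {kappa:.3f}")

On the conventional Landis–Koch scale, κ values between 0.21 and 0.40 indicate "fair" agreement, which matches how the abstract characterizes the Claude 3.5 Sonnet and Grok 3 results (κ = 0.219).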
ISSN: 2075-4418