Performance Evaluation and Implications of Large Language Models in Radiology Board Exams: Prospective Comparative Analysis


Bibliographic Details
Main Author: Boxiong Wei
Format: Article
Language: English
Published: JMIR Publications 2025-01-01
Series: JMIR Medical Education
Online Access: https://mededu.jmir.org/2025/1/e64284
Description
Summary: Background: Advances in artificial intelligence have enabled large language models to significantly impact radiology education and diagnostic accuracy. Objective: This study evaluates the performance of mainstream large language models, including GPT-4, Claude, Bard, Tongyi Qianwen, and Gemini Pro, in radiology board exams. Methods: A comparative analysis of 150 multiple-choice questions from radiology board exams without images was conducted. Models were assessed on their accuracy for text-based questions, and results were categorized by cognitive level and medical specialty using χ² tests. Results: GPT-4 achieved the highest accuracy (83.3%, 125/150), significantly outperforming all other models. Claude achieved an accuracy of 62% (93/150), and the remaining models likewise scored significantly lower than GPT-4. Conclusions: GPT-4 and Tongyi Qianwen show promise in medical education and training. The study emphasizes the need for domain-specific training datasets to enhance large language models' effectiveness in specialized fields like radiology.
ISSN: 2369-3762