Evaluation of Chat Generative Pre-trained Transformer and Microsoft Copilot Performance on the American Society of Surgery of the Hand Self-Assessment Examinations

Purpose: Artificial intelligence advancements have the potential to transform medical education and patient care. The growing popularity of large language models has raised important questions about their accuracy and their agreement with human users. The purpose of this study was to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT), versions 3.5 and 4, as well as Microsoft Copilot, which is powered by ChatGPT-4, on self-assessment examination questions for hand surgery, and to compare results between versions.

Methods: Input included 1,000 questions from 5 years (2015–2019) of self-assessment examinations provided by the American Society for Surgery of the Hand. The primary outcomes were correctness, percentage concordance with other users, and whether an additional prompt was required. Secondary outcomes included accuracy by question type and difficulty.

Results: All question formats, including image-based questions, were used in the analysis. ChatGPT-3.5 answered 51.6% of questions correctly and ChatGPT-4 answered 63.4%, a statistically significant difference. Microsoft Copilot answered 59.9% correctly, outperforming ChatGPT-3.5 but scoring significantly lower than ChatGPT-4. However, ChatGPT-3.5 sided with an average of 72.2% of users when correct and 62.1% when incorrect, compared with averages of 67.0% and 53.2% of users, respectively, for ChatGPT-4. Microsoft Copilot sided with an average of 79.7% of users when correct and 52.1% when incorrect. Across all models, the highest-scoring subject was Miscellaneous and the lowest-scoring subject was Neuromuscular.

Conclusions: In this study, ChatGPT-4 and Microsoft Copilot performed better on the hand surgery subspecialty examinations than did ChatGPT-3.5. Microsoft Copilot was more accurate than ChatGPT-3.5 but less accurate than ChatGPT-4. Both ChatGPT-4 and Microsoft Copilot were able to "pass" the 2015–2019 American Society for Surgery of the Hand self-assessment examinations.

Clinical Relevance: Although large language models hold promise within medical education, they should be used with caution, as more detailed evaluation of their consistency is still needed. Future studies should explore how these models perform across multiple trials and contexts to assess their reliability.
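
For a sense of scale, the reported accuracies correspond to roughly 516, 634, and 599 correct answers out of the 1,000 questions each model received. The short Python sketch below is a plausibility check only, not the authors' analysis: it compares the pairwise accuracy gaps with two-proportion z-tests from statsmodels, and it assumes the two samples in each comparison are independent, even though all models answered the same questions (where a paired test such as McNemar's would be stricter). The counts are inferred from the reported percentages.

    from statsmodels.stats.proportion import proportions_ztest

    N = 1000  # each model answered the same 1,000 ASSH questions (2015–2019)
    correct = {                    # counts inferred from reported percentages
        "ChatGPT-3.5": 516,        # 51.6%
        "ChatGPT-4": 634,          # 63.4%
        "Microsoft Copilot": 599,  # 59.9%
    }

    # Pairwise two-proportion z-tests (independence assumed for illustration)
    for a, b in [("ChatGPT-3.5", "ChatGPT-4"),
                 ("ChatGPT-3.5", "Microsoft Copilot"),
                 ("Microsoft Copilot", "ChatGPT-4")]:
        z, p = proportions_ztest([correct[a], correct[b]], [N, N])
        print(f"{a} vs {b}: z = {z:+.2f}, p = {p:.4f}")

Under this naive independence assumption, the ChatGPT-3.5 vs ChatGPT-4 and ChatGPT-3.5 vs Copilot gaps come out clearly significant, while the Copilot vs ChatGPT-4 gap (59.9% vs 63.4%) is borderline; the paired per-question data available to the authors can resolve differences of this size, which is why the sketch should be read only as a rough cross-check of the abstract's significance claims.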

Bibliographic Details
Main Authors: Taylor R. Rakauskas, BS; Antonio Da Costa, BS; Camberly Moriconi, BS; Gurnoor Gill, BA; Jeffrey W. Kwong, MD, MS; Nicolas Lee, MD
Format: Article
Language: English
Published: Elsevier, 2025-01-01
Series: Journal of Hand Surgery Global Online, Vol. 7, No. 1 (2025), pp. 23–28
ISSN: 2589-5141
Subjects: AI; LLM; ChatGPT; Education; Examination; Certification
Online Access: http://www.sciencedirect.com/science/article/pii/S2589514124001907
Author affiliations: Taylor R. Rakauskas, Antonio Da Costa, Camberly Moriconi, and Gurnoor Gill: College of Medicine, Florida Atlantic University, Boca Raton, FL. Jeffrey W. Kwong and Nicolas Lee: Department of Orthopaedic Surgery, University of California San Francisco, San Francisco, CA.
Corresponding author: Nicolas Lee, MD, University of California San Francisco, 1500 Owens Street, San Francisco, CA 94158.