ChatGPT versus human in generating medical graduate exam multiple choice questions-A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom).

Introduction: Large language models, in particular ChatGPT, have showcased remarkable language-processing capabilities. Given the substantial workload of university medical staff, this study assessed the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professorial staff based on standard medical textbooks.

Methods: Fifty MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's, and Bailey & Love's). Another 50 MCQs were drafted by two university professorial staff using the same textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score across five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for a medical graduate examination.

Results: ChatGPT took a total of 20 minutes 25 seconds to create its 50 questions, whereas the two human examiners took a total of 211 minutes 33 seconds to draft theirs. When mean scores for the A.I.-constructed questions were compared with those drafted by humans, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 ± 0.94 vs. human: 7.88 ± 0.52; p = 0.04). There was no significant difference in question quality between A.I.- and human-drafted questions in the total assessment score or in the other domains. Questions generated by the A.I. spanned a wider range of scores, while those created by humans were more consistent and fell within a narrower range.

Conclusion: ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations in a significantly shorter time.

Bibliographic Details
Main Authors: Billy Ho Hung Cheung, Gary Kui Kai Lau, Gordon Tin Chun Wong, Elaine Yuen Phin Lee, Dhananjay Kulkarni, Choon Sheong Seow, Ruby Wong, Michael Tiong-Hong Co
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2023-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0290691