AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination
Abstract
Background: The creation of high-quality multiple-choice questions (MCQs) is essential for medical education assessments but is resource-intensive and time-consuming when done by human experts. Large language models (LLMs) like ChatGPT-4o offer a promising alternative, but their efficacy remains...
Main Authors: Alex KK Law, Jerome So, Chun Tat Lui, Yu Fai Choi, Koon Ho Cheung, Kevin Kei-ching Hung, Colin Alexander Graham
Format: Article
Language: English
Published: BMC, 2025-02-01
Series: BMC Medical Education
Online Access: https://doi.org/10.1186/s12909-025-06796-6
Similar Items
- Configuring parents as citizens and consumers: local variations in informational material about school allocation and choice in Sweden
  by: Hanna Sjögren, et al.
  Published: (2025-02-01)
- Multiple Choice Questions in Anaesthesia: basic sciences
  by: Kumar, Bakul
  Published: (1992)
- Exploring situated expectancy-value theory: A study of gendered higher education choices
  by: Pikić-Jugović Ivana, et al.
  Published: (2024-01-01)
- Using Large Scale Summative Tests to Advance the Integration of Sustainable Development Competencies: A Model for Test Construction
  by: Williams-McBean Clavia T.
  Published: (2024-12-01)
- THE LANGUAGE CHOICE OF CHINESE COMMUNITY IN MEDAN: A SOCIOLINGUISTICS STUDY
  by: Vivi Adryani Nasution, et al.
  Published: (2020-02-01)