Can ChatGPT pass the MRCP (UK) written examinations? Analysis of performance and errors using a clinical decision-reasoning framework

Bibliographic Details
Main Authors: Stuart Maitland, Ross Fowkes, Amy Maitland
Format: Article
Language: English
Published: BMJ Publishing Group 2024-03-01
Series: BMJ Open
Online Access: https://bmjopen.bmj.com/content/14/3/e080558.full
collection DOAJ
description
Objective: Large language models (LLMs) such as ChatGPT are being developed for use in research, medical education and clinical decision systems. However, as their usage increases, LLMs face ongoing regulatory concerns. This study aims to analyse ChatGPT’s performance on a postgraduate examination to identify areas of strength and weakness, which may provide further insight into their role in healthcare.
Design: We evaluated the performance of ChatGPT 4 (24 May 2023 version) on official MRCP (Membership of the Royal College of Physicians) parts 1 and 2 written examination practice questions. Statistical analysis was performed using Python. Spearman rank correlation assessed the relationship between the probability of correctly answering a question and two variables: question difficulty and question length. Incorrectly answered questions were analysed further using a clinical reasoning framework to assess the errors made.
Setting: Online, using the ChatGPT web interface.
Primary and secondary outcome measures: The primary outcome was the score (percentage of questions correct) in the MRCP postgraduate written examinations. Secondary outcomes were qualitative categorisation of errors using a clinical decision-making framework.
Results: ChatGPT achieved accuracy rates of 86.3% (part 1) and 70.3% (part 2). Weak but significant correlations were found between ChatGPT’s accuracy and both just-passing rates in part 2 (r=0.34, p=0.0001) and question length in part 1 (r=−0.19, p=0.008). Eight types of error were identified, with the most frequent being factual errors, context errors and omission errors.
Conclusion: ChatGPT’s performance greatly exceeded the passing mark for both exams. Multiple-choice examinations provide a benchmark for LLM performance that is comparable to human demonstrations of knowledge, while also highlighting the errors LLMs make. Understanding the reasons behind ChatGPT’s errors allows us to develop strategies to prevent them in medical devices that incorporate LLM technology.
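The abstract states that Spearman rank correlation (computed in Python) was used to relate per-question accuracy to question difficulty and length. The study's own scripts are not reproduced here; the following is a minimal, self-contained pure-Python sketch of the Spearman coefficient (rank both variables with average ranks for ties, then take the Pearson correlation of the ranks). All function names and data values are illustrative assumptions, not taken from the paper.

```python
from statistics import mean

def rank(values):
    # Assign 1-based ranks, averaging ranks across tied values
    # (the standard convention for Spearman's rho).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation applied to the ranks.
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative (invented) data: question length vs. accuracy.
lengths = [120, 85, 200, 150, 60]
accuracy = [0.7, 0.9, 0.5, 0.6, 0.95]
rho = spearman_rho(lengths, accuracy)  # negative: longer questions rank lower on accuracy
```

In practice an analysis like this would typically call `scipy.stats.spearmanr`, which also returns the p-value reported in the abstract; the pure-Python version above only shows how the coefficient itself is formed.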
id doaj-art-6da4ecee682d4665a49091111f3d3fa7
issn 2044-6055
doi 10.1136/bmjopen-2023-080558
affiliations Stuart Maitland: Translational and Clinical Research Institute, Newcastle University Faculty of Medical Sciences, Newcastle upon Tyne, UK; Ross Fowkes: Health Education England North East, Newcastle upon Tyne, UK; Amy Maitland: Health Education England North East, Newcastle upon Tyne, UK