Performance evaluation of large language models for the national nursing examination in Japan
| Main Authors: | Tomoki Kuribara, Kengo Hirayama, Kenji Hirata |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SAGE Publishing, 2025-05-01 |
| Series: | Digital Health |
| ISSN: | 2055-2076 |
| Online Access: | https://doi.org/10.1177/20552076251346571 |

**Author affiliations:** Tomoki Kuribara, Department of Biostatistics, Graduate School of Medicine, Sapporo, Japan; Kengo Hirayama, School of Nursing, Sapporo, Japan; Kenji Hirata, Department of Diagnostic Imaging, Faculty of Medicine, Sapporo, Japan.
**Abstract**

**Objectives:** Large language models (LLMs) are increasingly used in healthcare and have potential for a wide range of applications. However, the performance of different LLMs on nursing licensure examinations, and the kinds of errors they tend to make, remain unclear. This study aimed to evaluate the accuracy of LLMs on basic nursing knowledge and to identify trends in their incorrect answers.

**Methods:** The dataset consisted of 692 questions from the Japanese national nursing examinations of the past 3 years (2021–2023); each examination comprises 240 multiple-choice questions with a total score of 300 points. The LLMs tested were ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot. Questions were entered manually into each LLM and the answers were collected. Accuracy rates were calculated to assess whether each LLM could pass the examination, and deductive content analysis and Chi-squared tests were conducted to identify tendencies in the incorrect answers.

**Results:** Across the 3 years, the mean total scores ± standard deviation (SD) for ChatGPT-3.5, ChatGPT-4, and Microsoft Copilot were 180.3 ± 22.2, 251.0 ± 13.1, and 256.7 ± 14.0, respectively. ChatGPT-4 and Microsoft Copilot achieved accuracy rates sufficient to pass the examination in every year. All LLMs made more mistakes in the health support and social security system domains (p < 0.01).

**Conclusions:** ChatGPT-4 and Microsoft Copilot may perform better than ChatGPT-3.5, and LLMs can answer incorrectly on questions about laws and demographic data specific to a particular country.
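The analysis the abstract describes (mean ± SD total scores across three exam years, a pass/fail judgment against the exam's cutoff, and a Chi-squared test on how errors distribute across exam domains) can be illustrated with a short Python sketch. This is a hypothetical reconstruction, not the authors' code: the error counts, score lists, two of the domain labels, and the 60% pass threshold are all placeholder assumptions.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical per-domain error counts for each LLM (rows: models, columns:
# domains). Only "health support" and "social security" are named in the
# abstract; the other labels and all counts are illustrative placeholders.
domains = ["health support", "social security", "clinical nursing", "other"]
errors = np.array([
    [18, 15, 9, 7],   # ChatGPT-3.5
    [9,  8,  3, 2],   # ChatGPT-4
    [8,  7,  3, 2],   # Microsoft Copilot
])

# Chi-squared test of independence: are errors unevenly distributed
# across domains (and models)?
chi2, p, dof, expected = chi2_contingency(errors)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Mean total score +/- SD over the 3 exam years (placeholder scores).
# PASS_MARK of 60% of 300 points is illustrative; actual cutoffs vary by year.
scores = {"ChatGPT-3.5": [160, 178, 203], "ChatGPT-4": [240, 248, 265]}
PASS_MARK = 0.6 * 300
for model, s in scores.items():
    mean, sd = np.mean(s), np.std(s, ddof=1)
    verdict = "pass" if mean >= PASS_MARK else "fail"
    print(f"{model}: {mean:.1f} +/- {sd:.1f} ({verdict})")
```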