An investigative analysis – ChatGPT’s capability to excel in the Polish speciality exam in pathology
This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation.
Main Authors: Michał Bielówka, Jakub Kufel, Marcin Rojek, Dominika Kaczyńska, Łukasz Czogalik, Adam Mitręga, Wiktoria Bartnikowska, Dominika Kondoł, Kacper Palkij, Sylwia Mielcarska
Format: Article
Language: English
Published: Termedia Publishing House, 2024-09-01
Series: Polish Journal of Pathology
Subjects: pathomorphology; artificial intelligence; language model; ChatGPT-3.5; specialty examination
Online Access: https://www.termedia.pl/An-investigative-analysis-ChatGPT-s-capability-to-excel-in-the-Polish-speciality-exam-in-pathology,55,54789,1,1.html
_version_ | 1832584706423521280 |
author | Michał Bielówka; Jakub Kufel; Marcin Rojek; Dominika Kaczyńska; Łukasz Czogalik; Adam Mitręga; Wiktoria Bartnikowska; Dominika Kondoł; Kacper Palkij; Sylwia Mielcarska |
author_sort | Michał Bielówka |
collection | DOAJ |
description | This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation. A set of 119 examination questions, categorised by type and subtype, was posed to the ChatGPT-3.5 model. Performance was analysed by success rate across the question categories and subtypes.
ChatGPT-3.5 achieved a score of 45.38%, significantly below the minimum PES pass threshold. Results varied by question type and subtype, with better performance on questions requiring “comprehension and critical thinking” than on those requiring “memory”.
The analysis shows that, although ChatGPT-3.5 can be a useful teaching tool, its performance in answering pathomorphology questions is significantly lower than that of human examinees. This finding highlights the need to further improve the AI model, taking into account the specificities of the medical field. Artificial intelligence can be helpful, but it cannot fully replace the experience and knowledge of specialists. |
format | Article |
id | doaj-art-19025899f7004b068e5e1d6b33eac4db |
institution | Kabale University |
issn | 1233-9687 2084-9869 |
language | English |
publishDate | 2024-09-01 |
publisher | Termedia Publishing House |
record_format | Article |
series | Polish Journal of Pathology |
spelling | doaj-art-19025899f7004b068e5e1d6b33eac4db | 2025-01-27T11:36:07Z | eng | Termedia Publishing House | Polish Journal of Pathology | ISSN: 1233-9687, 2084-9869 | 2024-09-01 | vol. 75, no. 3, pp. 236–240 | DOI: 10.5114/pjp.2024.143091 | article ID: 54789 | An investigative analysis – ChatGPT’s capability to excel in the Polish speciality exam in pathology | https://www.termedia.pl/An-investigative-analysis-ChatGPT-s-capability-to-excel-in-the-Polish-speciality-exam-in-pathology,55,54789,1,1.html |
title | An investigative analysis – ChatGPT’s capability to excel in the Polish speciality exam in pathology |
title_sort | investigative analysis chatgpt s capability to excel in the polish speciality exam in pathology |
topic | pathomorphology; artificial intelligence; language model; ChatGPT-3.5; specialty examination |
url | https://www.termedia.pl/An-investigative-analysis-ChatGPT-s-capability-to-excel-in-the-Polish-speciality-exam-in-pathology,55,54789,1,1.html |