ChatGPT’s Attitude, Knowledge, and Clinical Application in Geriatrics Practice and Education: Exploratory Observational Study
Main Author: | Huai Yong Cheng |
---|---|
Format: | Article |
Language: | English |
Published: | JMIR Publications, 2025-01-01 |
Series: | JMIR Formative Research |
Online Access: | https://formative.jmir.org/2025/1/e63494 |
author | Huai Yong Cheng |
---|---|
collection | DOAJ |
description |
Background: The increasing use of ChatGPT in clinical practice and medical education necessitates evaluation of its reliability, particularly in geriatrics.
Objective: This study aimed to evaluate ChatGPT's trustworthiness in geriatrics through 3 distinct approaches: evaluating ChatGPT's geriatrics attitude, geriatrics knowledge, and clinical application to 2 vignettes of geriatric syndromes (polypharmacy and falls).
Methods: We used the validated University of California, Los Angeles (UCLA) geriatrics attitude and knowledge instruments to evaluate ChatGPT's geriatrics attitude and knowledge and compared its performance with results reported in the literature for medical students, residents, and geriatrics fellows. We also evaluated ChatGPT's responses to 2 vignettes of geriatric syndromes (polypharmacy and falls).
Results: ChatGPT's mean total geriatrics attitude score was significantly lower than that of trainees (medical students, internal medicine residents, and geriatric medicine fellows): 2.7 vs 3.7 on a scale from 1 to 5 (1=strongly disagree; 5=strongly agree). ChatGPT's mean subscore for positive geriatrics attitude was higher than that of medical students, internal medicine residents, and neurologists (4.1 vs 3.7 on a scale from 1 to 5, where a higher subscore indicates a more positive attitude toward older adults). ChatGPT's mean subscore for negative geriatrics attitude was lower than that of the trainees and neurologists (1.8 vs 2.8 on a scale from 1 to 5, where a lower subscore indicates a less negative attitude toward aging). On the UCLA geriatrics knowledge test, ChatGPT outperformed all medical students, internal medicine residents, and geriatric medicine fellows from validated studies (14.7 vs 11.3, with a score range of –18 to +18, where +18 means all questions were answered correctly). In the polypharmacy vignette, ChatGPT demonstrated solid knowledge of potentially inappropriate medications and accurately identified 7 common potentially inappropriate medications, 5 drug-drug interactions, and 3 drug-disease interactions; however, it missed 5 drug-disease interactions and 1 drug-drug interaction and produced 2 hallucinations. In the fall vignette, ChatGPT answered 3 of 5 pretest questions correctly and 2 of 5 partially correctly, identified 6 categories of fall risk, followed fall guidelines correctly, listed 6 key physical examinations, and recommended 6 categories of fall prevention methods. (A hypothetical scoring sketch for these instruments appears after the record fields below.)
Conclusions: This study suggests that ChatGPT can be a valuable supplemental tool in geriatrics, offering reliable information with less age bias, robust geriatrics knowledge, and comprehensive recommendations for managing 2 common geriatric syndromes (polypharmacy and falls) that are consistent with evidence from guidelines, systematic reviews, and other types of studies. ChatGPT's potential as an educational and clinical resource could significantly benefit trainees, health care providers, and laypeople. Further research using GPT-4o, larger geriatrics question sets, and more geriatric syndromes is needed to expand and confirm these findings before ChatGPT is widely adopted for geriatrics education and practice. |
format | Article |
id | doaj-art-f3e127e6c3fa4ca391c5a6b0d7ffa0d7 |
institution | Kabale University |
issn | 2561-326X |
language | English |
publishDate | 2025-01-01 |
publisher | JMIR Publications |
record_format | Article |
series | JMIR Formative Research |
spelling | doaj-art-f3e127e6c3fa4ca391c5a6b0d7ffa0d7 2025-01-03T14:00:29Z eng JMIR Publications JMIR Formative Research 2561-326X 2025-01-01 9 e63494 10.2196/63494 Huai Yong Cheng https://orcid.org/0009-0004-9501-7143 https://formative.jmir.org/2025/1/e63494 |
title | ChatGPT’s Attitude, Knowledge, and Clinical Application in Geriatrics Practice and Education: Exploratory Observational Study |
url | https://formative.jmir.org/2025/1/e63494 |
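The attitude and knowledge results in the abstract above are reported on fixed scales: 1-5 Likert means with positive and negative subscores, and a knowledge score ranging from –18 to +18. The snippet below is a minimal, hypothetical Python sketch of how such scores could be tallied. The item groupings, the +1/−1/0 knowledge-scoring rule, and every function and variable name are illustrative assumptions rather than the study's actual scoring procedure; only the scale ranges are taken from the abstract.

```python
# Minimal, hypothetical sketch (not the study's code) of scoring the two UCLA
# instruments described above. Item groupings, the +1/-1/0 knowledge rule, and
# all names are illustrative assumptions; only the scale ranges (1-5 Likert,
# -18 to +18) come from the abstract.
from statistics import mean


def attitude_scores(responses, positive_items, negative_items):
    """Return (total, positive subscore, negative subscore) as 1-5 Likert means.

    responses      -- dict mapping item id -> Likert rating (1-5)
    positive_items -- ids of items worded positively toward older adults
    negative_items -- ids of items worded negatively toward aging
    """
    total = mean(responses.values())
    positive = mean(responses[i] for i in positive_items)
    negative = mean(responses[i] for i in negative_items)
    return total, positive, negative


def knowledge_score(answers, answer_key):
    """Score assuming +1 per correct, -1 per incorrect, and 0 for items left
    blank or answered "don't know", which yields a -18 to +18 range over
    18 items."""
    score = 0
    for item, key in answer_key.items():
        given = answers.get(item)
        if given is None or given == "don't know":
            continue  # unanswered or "don't know" contributes 0
        score += 1 if given == key else -1
    return score


# Toy usage with made-up data (3 attitude items, 3 knowledge items).
total, pos, neg = attitude_scores(
    {1: 4, 2: 5, 3: 2}, positive_items=[1, 2], negative_items=[3]
)
print(round(total, 1), pos, neg)  # 3.7 4.5 2
print(knowledge_score({"q1": "T", "q2": "F"}, {"q1": "T", "q2": "T", "q3": "F"}))  # 0
```

The published UCLA instruments define which items belong to the positive and negative subscales and how the total is formed; anyone reusing this sketch should substitute the published item assignments and scoring rules.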