Optimizing patient education for radioactive iodine therapy and the role of ChatGPT incorporating chain-of-thought technique: ChatGPT questionnaire

Bibliographic Details
Main Authors: Chao-Wei Tsai, Yi-Jing Lin, Jing-Uei Hou, Shih-Chuan Tsai, Pei-Chun Yeh, Chia-Hung Kao
Format: Article
Language: English
Published: SAGE Publishing 2025-07-01
Series: Digital Health
Online Access: https://doi.org/10.1177/20552076251357468
Description
Summary:
Background: ChatGPT has the potential to enhance patient education by offering clear and accurate responses, but its reliability in providing precise medical information is still under investigation. This study evaluates ChatGPT's effectiveness in assisting healthcare professionals with patient inquiries about radioiodine therapy.

Methods: This study used OpenAI's GPT-4o and GPT-4 models, with each query submitted as a separate prompt. Chain-of-thought prompting was used to require the model to articulate its step-by-step reasoning before giving the final answer, making the decision process transparent for qualitative evaluation. Three responses were generated per prompt and evaluated by three nuclear medicine physicians on a 4-point Likert scale across five aspects: appropriateness, helpfulness, consistency, validity of references, and empathy. Normality tests, Wilcoxon signed-rank tests, and chi-square tests were used for analysis.

Results: A total of 126 paired responses from GPT-4 and GPT-4o were independently rated by the three nuclear medicine physicians. The models performed similarly across the main dimensions (appropriateness, helpfulness, consistency, and validity of references), with no statistically significant differences (Wilcoxon signed-rank, p ≥ 0.01). High-level ratings (score ≥ 3) were achieved in appropriateness for 90.4% of GPT-4 outputs and 84.9% of GPT-4o outputs, and in helpfulness for 92.1% of outputs from both models. Citation accuracy was limited: fully valid references appeared in only 20.6% of GPT-4 and 21.4% of GPT-4o responses. Empathy was judged present in 56.3% of GPT-4 and 66.7% of GPT-4o answers (χ², p > 0.05). Inter-rater agreement was low (Fleiss κ = 0.04).

Conclusion: The results suggest that ChatGPT can furnish generally appropriate and helpful answers to frequently asked questions about radioactive iodine treatment, yet citation accuracy remains limited, underscoring the need for clinician oversight. GPT-4o and GPT-4 demonstrated comparable performance, indicating that model selection within this family has minimal impact under the controlled conditions studied.
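
The record does not reproduce the study's prompts or code. As a rough illustration of the Methods, the following is a minimal Python sketch of chain-of-thought prompting against GPT-4 and GPT-4o through the OpenAI Chat Completions API, generating three responses per question as the study describes. The system instruction and the sample question are assumptions for illustration, not the study's actual materials.

    from openai import OpenAI  # OpenAI Python SDK v1.x

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Illustrative chain-of-thought instruction; the study's exact prompt
    # wording is not published in this record.
    SYSTEM = (
        "You are assisting a nuclear medicine team with patient education for "
        "radioactive iodine therapy. Think through the question step by step, "
        "show your reasoning, then give a final answer a patient can "
        "understand, citing your references."
    )

    # Hypothetical patient question, standing in for the study's questionnaire items.
    QUESTION = "How long should I avoid close contact with children after I-131 therapy?"

    def ask(model: str, question: str, n: int = 3) -> list[str]:
        """Submit one question as a standalone prompt and collect n responses,
        mirroring the study's three-responses-per-prompt design."""
        replies = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": QUESTION},
                ],
            )
            replies.append(resp.choices[0].message.content)
        return replies

    for model in ("gpt-4", "gpt-4o"):
        for i, answer in enumerate(ask(model, QUESTION), start=1):
            print(f"--- {model}, response {i} ---\n{answer}\n")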
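
Likewise, a hedged sketch of the analysis step, using SciPy and statsmodels for the Wilcoxon signed-rank test, the chi-square test on empathy, and Fleiss κ. The rating matrices below are randomly generated stand-ins for the unpublished study data, and the empathy counts are back-calculated from the reported percentages (71/126 ≈ 56.3%, 84/126 ≈ 66.7%), so treat the numbers as approximations of the design, not the study's results.

    import numpy as np
    from scipy.stats import wilcoxon, chi2_contingency
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Stand-in data: rows = 126 responses, columns = 3 raters, values = 1-4
    # Likert scores. The real ratings are not published in this record.
    rng = np.random.default_rng(0)
    gpt4 = rng.integers(1, 5, size=(126, 3))
    gpt4o = rng.integers(1, 5, size=(126, 3))

    # Paired Wilcoxon signed-rank test on per-response mean scores (one
    # plausible pairing; the record does not specify the exact unit of analysis).
    stat, p = wilcoxon(gpt4.mean(axis=1), gpt4o.mean(axis=1))
    print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")

    # Chi-square test on empathy presence/absence; counts back-calculated
    # from the reported percentages, so approximate.
    empathy = np.array([[71, 55],    # GPT-4: present, absent
                        [84, 42]])   # GPT-4o: present, absent
    chi2, p_emp, dof, _ = chi2_contingency(empathy)
    print(f"Empathy chi-square: chi2 = {chi2:.2f}, p = {p_emp:.3f}")

    # Fleiss kappa for agreement among the three raters on one model's scores.
    table, _ = aggregate_raters(gpt4)
    print(f"Fleiss kappa (GPT-4 ratings): {fleiss_kappa(table):.2f}")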
ISSN: 2055-2076