Artificial intelligence chatbots as sources for patient education material on child abuse

Bibliographic Details
Main Authors: Lily Nguyen, Viet Tran, Joy Li, Denise Baughn, Joseph Shotwell, Kimberly Gushanas, Sayyeda Hasan, Lisa Falls, Rocksheng Zhong
Format: Article
Language: English
Published: Elsevier, 2025-07-01
Series: Child Protection and Practice
Online Access: http://www.sciencedirect.com/science/article/pii/S2950193825000749
Description
Summary:
Background: The World Health Organization defines childhood maltreatment as any form of abuse or neglect affecting children under 18 years of age that can cause actual or potential harm. Child abuse is a form of interpersonal trauma that can critically impact neurodevelopment and increase the risk of developing psychiatric disorders. With the increasing power and accessibility of artificial intelligence (AI) large language models, patients may turn to these platforms as sources of medical information. To date, no studies have evaluated the use of AI to create patient education materials on childhood maltreatment within the field of psychiatry.

Methods: Eight questions on child abuse from the National Child Traumatic Stress Network (NCTSN) were input into ChatGPT, Google Gemini, and Microsoft Copilot. A team of child psychiatrists and a pediatric psychologist reviewed and scored the responses from the NCTSN and from each AI chatbot, assessing quality, understandability, and actionability. Secondary outcomes included misinformation, readability, word count, and top references.

Results: The analysis of 32 responses showed good quality (mean DISCERN score 51.7) and moderate understandability (mean PEMAT 76.5%), but poor actionability (mean PEMAT 64%). Responses averaged a tenth-grade reading level, with ChatGPT responses being more difficult to read than NCTSN materials. AI-generated responses were significantly longer than NCTSN responses (p < 0.001).

Conclusions: The findings of this study suggest that AI chatbots may currently provide accurate, quality information on child abuse comparable to authoritative sources, albeit at significantly greater length. However, all sources lacked actionability and exceeded recommended reading levels, which limits their effectiveness. These constraints suggest that AI chatbots should supplement rather than replace primary sources of medical information. Urgent efforts are needed to improve the accessibility, readability, and actionability of patient education materials generated by AI and by standardized sources on topics such as child abuse and neglect.
ISSN: 2950-1938
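
The abstract reports a tenth-grade average reading level but does not name the readability formula the authors used. As a rough illustration only, the following is a minimal Python sketch assuming the widely used Flesch-Kincaid Grade Level formula; the sentence splitting and syllable counting are simplified heuristics introduced for this example, not the study's actual method or tooling.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; validated readability tools use
    # dictionaries or better rules, so treat this as illustrative only.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # discount a typical silent final 'e'
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / max(len(sentences), 1)
            + 11.8 * syllables / max(len(words), 1)
            - 15.59)

if __name__ == "__main__":
    # Hypothetical sample text, not taken from the study's responses.
    sample = ("Child abuse is any act, or failure to act, by a parent "
              "or caregiver that results in harm or risk of harm to a child.")
    print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

A similar sketch could compare word counts between the AI and NCTSN responses (for example, with a two-sample t-test), but the abstract does not specify which statistical test produced the reported p < 0.001.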