Is it a pediatric orthopaedic urgency or not? Can ChatGPT answer this question?
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-06-01 |
| Series: | Journal of Orthopaedic Surgery and Research |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s13018-025-05981-z |
| Summary: | Abstract Background Artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, is increasingly studied in healthcare. This study evaluated the accuracy and reliability of ChatGPT in guiding families on whether pediatric orthopaedic symptoms warrant emergency or outpatient care. Methods Five common pediatric orthopaedic scenarios were developed, and ChatGPT was queried in family-like language. For each scenario, two questions were asked: an initial general query and a follow-up query regarding the need for emergency versus outpatient care. ChatGPT’s responses were evaluated for accuracy (assessed against medical literature and online materials), completeness, and conciseness by two independent pediatric orthopaedic consultants using a modified Likert scale. Responses were classified from “perfect” to “poor” based on total scores. Results ChatGPT responded to five scenarios commonly encountered in pediatric orthopaedics. Scores ranged from 8 to 10, with most responses requiring minimal clarification. While ChatGPT demonstrated strong diagnostic reasoning and actionable advice, occasional inaccuracies, such as recommending elevation for slipped capital femoral epiphysis (SCFE), highlighted areas for improvement. Conclusion ChatGPT demonstrates potential as a supplemental tool for patient education and triage in pediatric orthopaedics, with generally accurate and accessible responses. These findings echo prior research on ChatGPT’s potential and challenges in orthopaedics, emphasizing its role as a supplemental, not standalone, resource. While its strengths in providing accurate and accessible advice are evident, its limitations necessitate further refinement and cautious use under human supervision. Continued advances in AI may further improve its safe integration into clinical care in pediatric orthopaedics. |
|---|---|
| ISSN: | 1749-799X |