Search for medical information for chronic rhinosinusitis through an artificial intelligence ChatBot

Bibliographic Details
Main Authors: Arsany Yassa, Olivia Ayad, David Avery Cohen, Aman M. Patel, Ved A. Vengsarkar, Michael S. Hegazin, Andrey Filimonov, Wayne D. Hsueh, Jean Anderson Eloy
Format: Article
Language: English
Published: Wiley 2024-10-01
Series: Laryngoscope Investigative Otolaryngology
Online Access: https://doi.org/10.1002/lio2.70009
Description
Summary:
Objectives: Artificial intelligence is evolving and significantly impacting health care, promising to transform access to medical information. With the rise of medical misinformation and frequent internet searches for health-related advice, there is a growing demand for reliable patient information. This study assesses the effectiveness of ChatGPT in providing information and treatment options for chronic rhinosinusitis (CRS).
Methods: Six inputs were entered into ChatGPT regarding the definition, prevalence, causes, symptoms, treatment options, and postoperative complications of CRS. The International Consensus Statement on Allergy and Rhinology: Rhinosinusitis served as the gold standard for evaluating the answers. The inputs were grouped into three categories, and Flesch–Kincaid readability metrics, ANOVA, and trend analysis were used to assess the responses.
Results: Although some discrepancies were found regarding CRS, ChatGPT's answers were largely in line with the existing literature. Mean Flesch Reading Ease score, Flesch–Kincaid Grade Level, and passive-sentence percentage were, respectively, 40.7, 12.15, and 22.5% for the basic information and prevalence category; 47.5, 11.2, and 11.1% for the causes and symptoms category; 33.05, 13.05, and 22.25% for the treatment and complications category; and 40.42, 12.13, and 18.62% across all categories. ANOVA indicated no statistically significant differences in readability across the categories (p-values: Flesch Reading Ease = 0.385, Flesch–Kincaid Grade Level = 0.555, passive sentences = 0.601). Trend analysis revealed that readability varied slightly across categories, with a general increase in complexity.
Conclusion: ChatGPT is a developing tool that may be useful for patients and medical professionals seeking medical information. However, caution is advised, as its answers may not be fully accurate when compared against clinical guidelines, or suitable for patients with varying educational backgrounds.
Level of evidence: 4.
ISSN: 2378-8038
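
As an illustration only, and not the authors' actual analysis pipeline, the following Python sketch shows how the readability metrics named in the abstract could be computed and compared across question categories with a one-way ANOVA. The textstat and scipy packages, the placeholder response texts, and the category names are assumptions for the example; the Flesch formulas themselves are standard.

import textstat                    # third-party readability library (assumed installed)
from scipy.stats import f_oneway   # one-way ANOVA

# Hypothetical placeholder texts standing in for ChatGPT's answers,
# grouped into the three categories described in the abstract.
categories = {
    "basic_info_prevalence": ["answer text 1 ...", "answer text 2 ..."],
    "causes_symptoms": ["answer text 3 ...", "answer text 4 ..."],
    "treatment_complications": ["answer text 5 ...", "answer text 6 ..."],
}

# Flesch Reading Ease  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# Flesch-Kincaid Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
fre = {name: [textstat.flesch_reading_ease(t) for t in texts]
       for name, texts in categories.items()}
fkgl = {name: [textstat.flesch_kincaid_grade(t) for t in texts]
        for name, texts in categories.items()}

# One-way ANOVA across the three categories; a p-value above 0.05, as
# reported in the abstract, would indicate no significant readability difference.
f_stat, p_value = f_oneway(*fre.values())
print(f"Flesch Reading Ease ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")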