Comparative evaluation of responses from DeepSeek-R1, ChatGPT-o1, ChatGPT-4, and dental GPT chatbots to patient inquiries about dental and maxillofacial prostheses

Bibliographic Details
Main Authors: Tuğgen Özcivelek, Berna Özcan
Format: Article
Language: English
Published: BMC 2025-05-01
Series:BMC Oral Health
Online Access:https://doi.org/10.1186/s12903-025-06267-w
Description
Summary: Abstract Background Artificial intelligence chatbots have the potential to inform and guide patients by providing human-like responses to questions about dental and maxillofacial prostheses. Information on the accuracy and quality of these responses is limited. This in-silico study aimed to evaluate the accuracy, quality, readability, understandability, and actionability of responses from the DeepSeek-R1, ChatGPT-o1, ChatGPT-4, and Dental GPT chatbots. Methods The four chatbots were queried with 35 of the questions patients most frequently ask about their prostheses. The accuracy, quality, understandability, and actionability of the responses were assessed by two prosthodontists using a five-point Likert scale, the Global Quality Score, and the Patient Education Materials Assessment Tool for Printed Materials, respectively. Readability was scored using the Flesch-Kincaid Grade Level and Flesch Reading Ease. Inter-rater agreement was assessed with Cohen's kappa. Differences between chatbots were analyzed using the Kruskal-Wallis test, one-way ANOVA, and post-hoc tests. Results The chatbots differed significantly in accuracy and readability (p < .05). Dental GPT recorded the highest accuracy score, whereas ChatGPT-4 had the lowest. DeepSeek-R1 performed best in readability, while Dental GPT performed worst. Quality, understandability, actionability, and reader education scores did not differ significantly. Conclusions While accuracy may vary among chatbots, the domain-specifically trained AI tool and ChatGPT-o1 demonstrated superior accuracy. Even when accuracy is high, misinformation in health care can have significant consequences. Enhancing the readability of responses is essential, and chatbots should be chosen accordingly. The accuracy and readability of information provided by chatbots should be monitored in the interest of public health.
ISSN:1472-6831