ChatGPT-4 Omni’s superiority in answering multiple-choice oral radiology questions
Abstract: Objectives: This study evaluates and compares the performance of ChatGPT-3.5, ChatGPT-4 Omni (4o), Google Bard, and Microsoft Copilot in responding to text-based multiple-choice questions related to oral radiology, as featured in the Dental Specialty Admission Exam conducted in Türkiye. Mate...
| Main Author: | Melek Tassoker |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-02-01 |
| Series: | BMC Oral Health |
| Online Access: | https://doi.org/10.1186/s12903-025-05554-w |
Similar Items
- Large language models’ capabilities in responding to tuberculosis medical questions: testing ChatGPT, Gemini, and Copilot
  by: Meisam Dastani, et al.
  Published: (2025-05-01)
- Comparative analysis of AI chatbot (ChatGPT-4.0 and Microsoft Copilot) and expert responses to common orthodontic questions: patient and orthodontist evaluations
  by: Farhad Salmanpour, et al.
  Published: (2025-06-01)
- A review on enhancing education with AI: exploring the potential of ChatGPT, Bard, and generative AI
  by: Anduamlak Abebe Fenta
  Published: (2025-02-01)
- Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology
  by: Patricia Leutz-Schmidt, et al.
  Published: (2025-02-01)
- Comparative analysis of ChatGPT and Gemini (Bard) in medical inquiry: a scoping review
  by: Fattah H. Fattah, et al.
  Published: (2025-02-01)