Comparative Evaluation of Multimodal Large Language Models for No-Reference Image Quality Assessment with Authentic Distortions: A Study of OpenAI and Claude.AI Models
This study presents a comparative analysis of several multimodal large language models (LLMs) for no-reference image quality assessment, with a particular focus on images containing authentic distortions. We evaluate three models developed by OpenAI and three models from Claude.AI, comparing their p...
| Main Author: | Domonkos Varga |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Big Data and Cognitive Computing |
| Online Access: | https://www.mdpi.com/2504-2289/9/5/132 |
Similar Items
- System 2 Thinking in OpenAI’s o1-Preview Model: Near-Perfect Performance on a Mathematics Exam
  by: Joost C. F. de Winter, et al. Published: (2024-10-01)
- Multimodal AI and Large Language Models for Orthopantomography Radiology Report Generation and Q&A
  by: Chirath Dasanayaka, et al. Published: (2025-03-01)
- Assessing how accurately large language models encode and apply the common European framework of reference for languages
  by: Luca Benedetto, et al. Published: (2025-06-01)
- Coherent Interpretation of Entire Visual Field Test Reports Using a Multimodal Large Language Model (ChatGPT)
  by: Jeremy C. K. Tan. Published: (2025-04-01)
- OpenAI o1 Large Language Model Outperforms GPT-4o, Gemini 1.5 Flash, and Human Test Takers on Ophthalmology Board–Style Questions
  by: Ryan Shean, BA, et al. Published: (2025-11-01)