An Empirical Evaluation of Large Language Models on Consumer Health Questions
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-02-01 |
| Series: | BioMedInformatics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2673-7426/5/1/12 |
| Summary: | <b>Background:</b> Large Language Models (LLMs) have demonstrated strong performance on clinical question-answering (QA) benchmarks, yet their effectiveness in addressing real-world consumer medical queries remains underexplored. This study evaluates the capabilities and limitations of LLMs in answering consumer health questions using the MedRedQA dataset, which consists of medical questions and answers by verified experts from the AskDocs subreddit. <b>Methods:</b> Five LLMs (GPT-4o mini, Llama 3.1-70B, Mistral-123B, Mistral-7B, and Gemini-Flash) were assessed using a cross-evaluation framework: each model generated responses to consumer queries, and every model then evaluated each model's outputs against the expert responses. Human evaluation was used to assess the reliability of the models as evaluators. <b>Results:</b> GPT-4o mini achieved the highest alignment with expert responses according to four of the five model judges, while Mistral-7B scored lowest according to three of the five. Overall, model responses showed low alignment with expert responses. <b>Conclusions:</b> Current small- and medium-sized LLMs struggle to provide accurate answers to consumer health questions and require significant improvement. |
| ISSN: | 2673-7426 |
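The cross-evaluation framework described in the summary can be sketched as a simple loop: every model answers every question, and every model then judges every generated answer against the expert reference. The sketch below is an illustrative reconstruction only, not the authors' code; the model identifiers and the `generate` and `judge` helpers are hypothetical placeholders for whatever APIs the study actually used.

```python
# Illustrative sketch of a cross-evaluation loop (not the paper's implementation).
# MODELS, generate(), and judge() are hypothetical placeholders.

MODELS = ["gpt-4o-mini", "llama-3.1-70b", "mistral-123b", "mistral-7b", "gemini-flash"]

def generate(model: str, question: str) -> str:
    """Placeholder: call `model` to answer a consumer health question."""
    raise NotImplementedError

def judge(judge_model: str, question: str, answer: str, expert_answer: str) -> float:
    """Placeholder: ask `judge_model` to score `answer` against the expert reference."""
    raise NotImplementedError

def cross_evaluate(dataset):
    """For each (question, expert_answer) pair, every model generates an answer
    and every model scores every answer, yielding a generator-by-judge score grid."""
    scores = {(gen, jdg): [] for gen in MODELS for jdg in MODELS}
    for question, expert_answer in dataset:
        answers = {gen: generate(gen, question) for gen in MODELS}
        for gen, answer in answers.items():
            for jdg in MODELS:
                scores[(gen, jdg)].append(judge(jdg, question, answer, expert_answer))
    # Average per (generator, judge) pair, so each judge can rank the generators
    # by alignment with expert answers.
    return {pair: sum(vals) / len(vals) for pair, vals in scores.items()}
```

Averaging per (generator, judge) pair makes it possible to report results of the form "GPT-4o mini ranked highest according to four of the five judges", since each judge produces its own ranking of the generators.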