Quality assessment of large language models’ output in maternal health
| Main Authors: | Henrique A. Lima, Pedro H. F. S. Trocoli-Couto, Zorays Moazzam, Leonardo C. D. Rocha, Adriana Pagano, Felipe F. Martins, Lucas T. Brabo, Zilma S. N. Reis, Lisa Keder, Aliya Begum, Marcelo H. Mamede, Timothy M. Pawlik, Vivian Resende |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Subjects: | Maternal health education; Large language models; Evaluation; Low- and Middle-Income countries |
| Online Access: | https://doi.org/10.1038/s41598-025-03501-x |
| Description: | Abstract: Optimising healthcare is linked to broadening access to health literacy in Low- and Middle-Income Countries. The safe and responsible deployment of Large Language Models (LLMs) may provide accurate, reliable, and culturally relevant healthcare information. We aimed to assess the quality of outputs generated by LLMs addressing maternal health. We employed GPT-4, GPT-3.5, a custom GPT-3.5, and Meditron-70b. Using a mixed-methods, cross-sectional survey approach, specialists from Brazil, the United States, and Pakistan assessed LLM-generated responses, in their native languages, to a set of three questions relating to maternal health. Evaluators assessed the answers in technical and non-technical scenarios. The LLMs’ responses were evaluated for information quality, clarity, readability, and adequacy. Of the 47 respondents, 85% were female, with a mean age of 50 years, a mean of 19 years of experience, and a volume of 110 assisted pregnancies monthly. Scores attributed to answers by GPT-3.5 and GPT-4 were consistently higher [overall: GPT-3.5, 3.9 (3.8–4.1); GPT-4.0, 3.9 (3.8–4.1); custom GPT-3.5, 2.7 (2.5–2.8); Meditron-70b, 3.5 (3.3–3.6); p < 0.001]. The responses garnered high scores for clarity (Q&A-1: 3.5; Q&A-2: 3.7; Q&A-3: 3.8) and for quality of content (Q&A-1: 3.2; Q&A-2: 3.2; Q&A-3: 3.7); however, they differed by language. The commonest limitation to quality was incomplete content. Readability analysis indicated that the responses may require a high educational level for comprehension. Gender bias was detected, as the models referred to healthcare professionals as male. Overall, GPT-4 and GPT-3.5 outperformed all other models. These findings highlight the potential of artificial intelligence in improving access to high-quality maternal health information. Given the complex process of generating high-quality non-English databases, it is desirable to incorporate more accurate translation tools and resourceful architectures for contextualisation and customisation. |
| ISSN: | 2045-2322 |
| Affiliations: | Henrique A. Lima: Federal University of Minas Gerais Faculty of Medicine; Pedro H. F. S. Trocoli-Couto: Federal University of Minas Gerais Faculty of Medicine; Zorays Moazzam: Henry Ford Hospital; Leonardo C. D. Rocha: Federal University of São João Del-Rei Computer Science Department; Adriana Pagano: Federal University of Minas Gerais Arts Faculty; Felipe F. Martins: Asenion; Lucas T. Brabo: Asenion; Zilma S. N. Reis: Federal University of Minas Gerais Faculty of Medicine; Lisa Keder: Department of Gynaecology and Obstetrics, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center; Aliya Begum: Department of Gynaecology and Obstetrics, The Aga Khan University; Marcelo H. Mamede: Federal University of Minas Gerais Faculty of Medicine; Timothy M. Pawlik: Department of Surgery, The Ohio State University Wexner Medical Center and James Comprehensive Cancer Center; Vivian Resende: Federal University of Minas Gerais Faculty of Medicine |
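The abstract notes that readability analysis suggested the responses may require a high educational level for comprehension. Readability is typically quantified with formulas such as the Flesch-Kincaid grade level; the sketch below is an illustration of that general class of metric, not the study's actual method, and uses a deliberately crude vowel-group heuristic for syllable counting:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels (including y);
    # every word is assumed to have at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Split on terminal punctuation for sentences, letters for words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = "Take the tablet twice a day. Call your doctor if pain persists."
print(round(flesch_kincaid_grade(sample), 1))  # ≈ US school grade 3.5
```

A higher score means more years of schooling are needed to comprehend the text; a health-information response scoring at grade 12 or above would support the abstract's concern about accessibility for general audiences.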