Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics

Bibliographic Details
Main Authors: Pal A, Wangmo T, Bharadia T, Ahmed-Richards M, Bhanderi MB, Kachhadiya R, Allemann SS, Elger BS
Format: Article
Language: English
Published: Dove Medical Press, 2025-07-01
Series: Patient Preference and Adherence
Subjects: Artificial Intelligence; Large Language Model; Ethics; Health Literacy; Plain Language Summary
Online Access: https://www.dovepress.com/generative-aillms-for-plain-language-medical-information-for-patients--peer-reviewed-fulltext-article-PPA
author Pal A
Wangmo T
Bharadia T
Ahmed-Richards M
Bhanderi MB
Kachhadiya R
Allemann SS
Elger BS
author_sort Pal A
collection DOAJ
description Avishek Pal,1 Tenzin Wangmo,1 Trishna Bharadia,2,3 Mithi Ahmed-Richards,4,5 Mayank Bhailalbhai Bhanderi,6 Rohitbhai Kachhadiya,6 Samuel S Allemann,7 Bernice Simone Elger1,8
1Institute for Biomedical Ethics, University of Basel, Basel, Switzerland; 2Patient Author, The Spark Global, Buckinghamshire, UK; 3Centre for Pharmaceutical Medicine Research, King’s College London, London, UK; 4Current Medical Research & Opinion, Taylor & Francis Group, London, UK; 5Patient Author, Scleroderma and Raynaud’s UK, London, UK; 6Innomagine Consulting Private Limited, Hyderabad, India; 7Department of Pharmaceutical Sciences, University of Basel, Basel, Switzerland; 8Center for Legal Medicine, University of Geneva, Geneva, Switzerland
Correspondence: Avishek Pal, Institute for Biomedical Ethics, University of Basel, Bernoullistrasse 28, Basel, 4056, Switzerland, Tel +41 79 835 0983, Email Avishek.pal@unibas.ch
Abstract: Generative artificial intelligence (gAI) tools and large language models (LLMs) are gaining popularity among non-specialist audiences (patients, caregivers, and the general public) as a source of plain language medical information. AI-based models have the potential to act as a convenient, customizable and easy-to-access source of information that can improve patients’ self-care and health literacy and enable greater engagement with clinicians. However, serious negative outcomes could occur if these tools fail to provide reliable, relevant and understandable medical information. Herein, we review published findings on the opportunities and risks associated with such use of gAI/LLMs. We reviewed 44 articles published between January 2023 and July 2024. The included articles focused mainly on readability and accuracy; however, only three studies involved actual patients. Responses were reported to be reasonably accurate and sufficiently readable and detailed. The most commonly reported risks were oversimplification, over-generalization, lower accuracy in response to complex questions, and lack of transparency regarding information sources. There are ethical concerns that overreliance on, or unsupervised use of, gAI/LLMs could lead to the “humanizing” of these models and pose a risk to patient health equity, inclusiveness and data privacy. For these technologies to be truly transformative, they must become more transparent, have appropriate governance and monitoring, and incorporate feedback from healthcare professionals (HCPs), patients, and other experts. Uptake of these technologies will also require education and awareness among non-specialist audiences regarding their optimal use as sources of plain language medical information.
Plain language summary: More and more people are using special computer programs called artificial intelligence (AI) or large language models (LLMs) to find medical facts in simple words they can understand. This can help people take better care of themselves, learn about their health, and talk with their doctors. We found that AI/LLMs generally provided correct and helpful information to people. However, there is also a risk of incorrect or unreliable information in certain situations, such as when the question is complex. This can cause harm if people use this information to make their own medical decisions. Also, AI/LLMs give human-like responses, which can make people trust them more than they should. There is a risk that people may share their medical information with AI/LLMs, and this information could get into the wrong hands. To make sure these programs really help people, they need to be clear about how they work, follow good rules, and take advice from doctors and patients to improve their performance. People also need to be trained on how to best use these AI tools to find easy-to-understand and reliable medical information. It is important for doctors, patients and other health workers to help make sure the AI is producing reliable and understandable medical information for patients.
Keywords: artificial intelligence, large language model, ethics, health literacy, plain language summary
format Article
id doaj-art-cea66eb2cb3f436fbc44a0df4ac3d79d
institution Kabale University
issn 1177-889X
language English
publishDate 2025-07-01
publisher Dove Medical Press
record_format Article
series Patient Preference and Adherence
spelling doaj-art-cea66eb2cb3f436fbc44a0df4ac3d79d; indexed 2025-08-20T03:58:36Z; eng; Dove Medical Press; Patient Preference and Adherence, ISSN 1177-889X; 2025-07-01; Volume 19, pp. 2227–2249; Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics; Pal A, Wangmo T, Bharadia T, Ahmed-Richards M, Bhanderi MB, Kachhadiya R, Allemann SS, Elger BS; https://www.dovepress.com/generative-aillms-for-plain-language-medical-information-for-patients--peer-reviewed-fulltext-article-PPA
title Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics
title_sort generative ai llms for plain language medical information for patients caregivers and general public opportunities risks and ethics
topic Artificial Intelligence
Large Language Model
Ethics
Health Literacy
Plain Language Summary
url https://www.dovepress.com/generative-aillms-for-plain-language-medical-information-for-patients--peer-reviewed-fulltext-article-PPA