The need for guardrails with large language models in pharmacovigilance and other medical safety critical settings
Abstract: Large language models (LLMs) are useful tools capable of performing specific types of knowledge work at an effective scale. However, LLM deployments in high-risk and safety-critical domains pose unique challenges, notably the issue of "hallucinations", where LLMs can generate fab...
| Main Authors: | Joe B. Hakim, Jeffery L. Painter, Darmendra Ramcharran, Vijay Kara, Greg Powell, Paulina Sobczak, Chiho Sato, Andrew Bate, Andrew Beam |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-09138-0 |
Similar Items
- Perspective review: Will generative AI make common data models obsolete in future analyses of distributed data networks?
  by: Jeffery L. Painter, et al.
  Published: (2025-04-01)
- Artificial Intelligence in Healthcare Literacy: Promise, Gaps, and Guardrails
  by: Diogo Medina
  Published: (2025-06-01)
- A Study of the Deflections of Metal Road Guardrail Post
  by: Olegas Prentkovskis, et al.
  Published: (2010-06-01)
- Design and Mechanical Behavior Research of Highway Guardrail Patrol Robot
  by: Hong Chang, et al.
  Published: (2025-02-01)
- Overview of the Patents and Patent Applications on Upper Guardrail Protection Systems for Motorcyclists
  by: Laura Brigita Parežnik, et al.
  Published: (2025-06-01)