The need for guardrails with large language models in pharmacovigilance and other medical safety critical settings

Abstract: Large language models (LLMs) are useful tools with the capacity for performing specific types of knowledge work at an effective scale. However, LLM deployments in high-risk and safety-critical domains pose unique challenges, notably the issue of "hallucinations", where LLMs can generate fab...


Bibliographic Details
Main Authors: Joe B. Hakim, Jeffery L. Painter, Darmendra Ramcharran, Vijay Kara, Greg Powell, Paulina Sobczak, Chiho Sato, Andrew Bate, Andrew Beam
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Online Access:https://doi.org/10.1038/s41598-025-09138-0
