The need for guardrails with large language models in pharmacovigilance and other medical safety-critical settings
Abstract: Large language models (LLMs) are useful tools with the capacity for performing specific types of knowledge work at an effective scale. However, LLM deployments in high-risk and safety-critical domains pose unique challenges, notably the issue of “hallucinations”, where LLMs can generate fab...
| Main Authors: | , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-09138-0 |