Empirically derived evaluation requirements for responsible deployments of AI in safety-critical settings
Abstract: Processes to assure the safe, effective, and responsible deployment of artificial intelligence (AI) in safety-critical settings are urgently needed. Here we show a procedure to empirically evaluate the impacts of AI augmentation as a basis for responsible deployment. We evaluated three augm...
| Main Authors: | Dane A. Morey, Michael F. Rayo, David D. Woods |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01784-y |
Similar Items
- Mitigated deployment strategy for ethical AI in clinical settings
  by: Sahar Abdulrahman, et al.
  Published: (2025-07-01)
- Empirical Investigation of Critical Requirements Engineering Practices for Global Software Development
  by: Habib Ullah Khan, et al.
  Published: (2021-01-01)
- A requirements model for AI algorithms in functional safety-critical systems with an explainable self-enforcing network from a developer perspective
  by: Klüver Christina, et al.
  Published: (2024-01-01)
- Innovative Guardrails for Generative AI: Designing an Intelligent Filter for Safe and Responsible LLM Deployment
  by: Olga Shvetsova, et al.
  Published: (2025-06-01)
- Methods of Deployment and Evaluation of FPGA as a Service Under Conditions of Changing Requirements and Environments
  by: Artem Perepelitsyn, et al.
  Published: (2025-06-01)