Mitigated deployment strategy for ethical AI in clinical settings
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMJ Publishing Group, 2025-07-01 |
| Series: | BMJ Health & Care Informatics |
| Online Access: | https://informatics.bmj.com/content/32/1/e101363.full |
| Summary: | Clinical diagnostic tools can disadvantage subgroups due to poor model generalisability, which can be caused by unrepresentative training data. Practical deployment solutions that mitigate harm to subgroups from models with differential performance have yet to be established. This paper builds on existing work that considers a selective deployment approach, in which subgroups for whom the model performs poorly are excluded from deployment. Alternatively, the proposed ‘mitigated deployment’ strategy requires safety nets to be built into clinical workflows to safeguard under-represented groups within a universal deployment. This approach relies on human–artificial intelligence collaboration and postmarket evaluation to continually improve model performance across subgroups with real-world data. A real-world case study is used to explore the benefits and limitations of mitigated deployment. This will add to the tools available to healthcare organisations considering how to safely deploy models with differential performance across subgroups. |
| ISSN: | 2632-1009 |