Identifying and mitigating algorithmic bias in the safety net
Abstract: Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record...
| Main Authors: | Shaina Mackin, Vincent J. Major, Rumi Chunara, Remle Newton-Dame |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01732-w |
Similar Items
- Post-processing methods for mitigating algorithmic bias in healthcare classification models: An extended umbrella review
  by: Shaina Mackin, et al.
  Published: (2025-08-01)
- Connecting the uninsured to care: Engaging new primary care patients at a New York City safety net system
  by: Caroline Cooke, et al.
  Published: (2025-03-01)
- Metrics and Algorithms for Identifying and Mitigating Bias in AI Design: A Counterfactual Fairness Approach
  by: Dongsoo Moon, et al.
  Published: (2025-01-01)
- On Identifying and Mitigating Bias in the Estimation of the COVID-19 Case Fatality Rate
  by: Michael I. Jordan
  Published: (2020-06-01)
- On the Potential of Algorithm Fusion for Demographic Bias Mitigation in Face Recognition
  by: Jascha Kolberg, et al.
  Published: (2024-01-01)