Identifying and mitigating algorithmic bias in the safety net

Bibliographic Details
Main Authors: Shaina Mackin, Vincent J. Major, Rumi Chunara, Remle Newton-Dame
Format: Article
Language: English
Published: Nature Portfolio, 2025-06-01
Series: npj Digital Medicine
Online Access: https://doi.org/10.1038/s41746-025-01732-w
Description
Summary: Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as (1) absolute subgroup EODs <5 percentage points, (2) accuracy reduction <10%, and (3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation.
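The summary names two quantities that are straightforward to express in code: equal opportunity difference (the gap in false negative rates between subgroups) and per-subgroup threshold adjustment (choosing cutoffs that shrink that gap). The sketch below illustrates those textbook definitions only; it is not the authors' implementation or the Supplementary Playbook's code, and the function names, reference-group convention, and threshold grid are assumptions made for illustration.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, groups, reference_group):
    """EOD: gap in false negative rate (FNR) between each subgroup and a
    chosen reference subgroup. A positive value means the subgroup's true
    positives are missed more often than the reference group's."""
    def fnr(y_t, y_p):
        positives = y_t == 1
        if positives.sum() == 0:
            return np.nan
        return np.mean(y_p[positives] == 0)

    ref_mask = groups == reference_group
    ref_fnr = fnr(y_true[ref_mask], y_pred[ref_mask])
    return {
        g: fnr(y_true[groups == g], y_pred[groups == g]) - ref_fnr
        for g in np.unique(groups)
    }

def adjust_thresholds(y_true, scores, groups, reference_group,
                      base_threshold=0.5, grid=np.linspace(0.05, 0.95, 91)):
    """Per-subgroup threshold adjustment: for each non-reference subgroup,
    pick the score cutoff whose FNR is closest to the reference subgroup's
    FNR at the base threshold, so subgroup EODs move toward zero."""
    def fnr_at(y_t, s, thr):
        positives = y_t == 1
        if positives.sum() == 0:
            return np.nan
        return np.mean(s[positives] < thr)

    ref_mask = groups == reference_group
    target_fnr = fnr_at(y_true[ref_mask], scores[ref_mask], base_threshold)
    thresholds = {reference_group: base_threshold}
    for g in np.unique(groups):
        if g == reference_group:
            continue
        mask = groups == g
        gaps = [abs(fnr_at(y_true[mask], scores[mask], t) - target_fnr)
                for t in grid]
        thresholds[g] = float(grid[int(np.nanargmin(gaps))])
    return thresholds

# Hypothetical usage with arrays of labels, scores, and subgroup codes:
# eod = equal_opportunity_difference(y_true, scores >= 0.5, groups, "group_a")
# new_thresholds = adjust_thresholds(y_true, scores, groups, "group_a")
```

In practice one would also check the paper's other success criteria (accuracy reduction and alert rate change) before adopting the adjusted thresholds; the sketch covers only the EOD piece.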
ISSN: 2398-6352