Identifying and mitigating algorithmic bias in the safety net
Abstract Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as 1) absolute subgroup EODs <5 percentage points, 2) accuracy reduction <10%, and 3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation.
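The EOD metric and threshold-adjustment approach described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; the function names and the simple grid search over candidate thresholds are illustrative assumptions. EOD here is each subgroup's false negative rate minus a reference subgroup's, and threshold adjustment picks a per-subgroup alert threshold whose FNR best matches the reference:

```python
import numpy as np

def false_negative_rate(y_true, y_score, threshold):
    """Among true positives, the fraction scored below the alert threshold."""
    positives = y_true == 1
    if positives.sum() == 0:
        return 0.0
    return float(np.mean(y_score[positives] < threshold))

def equal_opportunity_difference(y_true, y_score, groups, reference, threshold=0.5):
    """EOD per subgroup: FNR(subgroup) - FNR(reference), in percentage points."""
    ref = groups == reference
    ref_fnr = false_negative_rate(y_true[ref], y_score[ref], threshold)
    return {g: 100.0 * (false_negative_rate(y_true[groups == g],
                                            y_score[groups == g], threshold) - ref_fnr)
            for g in np.unique(groups) if g != reference}

def adjust_thresholds(y_true, y_score, groups, reference, base_threshold=0.5):
    """Grid-search a per-subgroup threshold whose FNR best matches the reference's."""
    ref = groups == reference
    ref_fnr = false_negative_rate(y_true[ref], y_score[ref], base_threshold)
    thresholds = {reference: base_threshold}
    candidates = np.linspace(0.01, 0.99, 99)
    for g in np.unique(groups):
        if g == reference:
            continue
        mask = groups == g
        fnrs = np.array([false_negative_rate(y_true[mask], y_score[mask], t)
                         for t in candidates])
        thresholds[g] = float(candidates[np.argmin(np.abs(fnrs - ref_fnr))])
    return thresholds
```

A subgroup whose scores run systematically low would receive a lower threshold, shrinking its EOD toward the paper's <5 percentage point target; the abstract's other criteria (accuracy and alert-rate change) would be checked separately after adjustment.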
| Main Authors: | Shaina Mackin, Vincent J. Major, Rumi Chunara, Remle Newton-Dame |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01732-w |
| Field | Value |
|---|---|
| author | Shaina Mackin, Vincent J. Major, Rumi Chunara, Remle Newton-Dame |
| collection | DOAJ |
| description | Abstract Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as 1) absolute subgroup EODs <5 percentage points, 2) accuracy reduction <10%, and 3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation. |
| format | Article |
| id | doaj-art-31ff2d587af94fdaa9bff56cf20fd208 |
| institution | OA Journals |
| issn | 2398-6352 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | npj Digital Medicine |
| authors & affiliations | Shaina Mackin (Office of Population Health, New York City Health + Hospitals); Vincent J. Major (Department of Population Health, NYU Grossman School of Medicine); Rumi Chunara (Center for Health Data Science, New York University); Remle Newton-Dame (Office of Population Health, New York City Health + Hospitals) |
| title | Identifying and mitigating algorithmic bias in the safety net |
| url | https://doi.org/10.1038/s41746-025-01732-w |