Evaluating sepsis watch generalizability through multisite external validation of a sepsis machine learning model
Abstract Sepsis accounts for a substantial portion of global deaths and healthcare costs. The objective of this reproducibility study is to validate Duke Health’s Sepsis Watch ML model in a new community healthcare setting and assess its performance and clinical utility in early sepsis detection at Summa Health’s emergency departments. The study analyzed the model’s ability to predict sepsis using a combination of static and dynamic patient data from 205,005 encounters between 2020 and 2021 from 101,584 unique patients. 54.7% (n = 112,223) of patients were female, and the average age was 50 (IQR [38, 71]). The AUROC ranged from 0.906 to 0.960, and the AUPRC ranged from 0.177 to 0.252 across the four sites. Ultimately, the reproducibility of the Sepsis Watch model in a community health system setting confirmed its strong, robust performance and its portability across different geographic and demographic contexts, with little variation.
| Main Authors: | Bruno Valan, Anusha Prakash, William Ratliff, Michael Gao, Srikanth Muthya, Ajit Thomas, Jennifer L. Eaton, Matt Gardner, Marshall Nichols, Mike Revoir, Dustin Tart, Cara O’Brien, Manesh Patel, Suresh Balu, Mark Sendak |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01664-5 |
| Field | Value |
|---|---|
| author | Bruno Valan, Anusha Prakash, William Ratliff, Michael Gao, Srikanth Muthya, Ajit Thomas, Jennifer L. Eaton, Matt Gardner, Marshall Nichols, Mike Revoir, Dustin Tart, Cara O’Brien, Manesh Patel, Suresh Balu, Mark Sendak |
| author_sort | Bruno Valan |
| collection | DOAJ |
| description | Abstract Sepsis accounts for a substantial portion of global deaths and healthcare costs. The objective of this reproducibility study is to validate Duke Health’s Sepsis Watch ML model in a new community healthcare setting and assess its performance and clinical utility in early sepsis detection at Summa Health’s emergency departments. The study analyzed the model’s ability to predict sepsis using a combination of static and dynamic patient data from 205,005 encounters between 2020 and 2021 from 101,584 unique patients. 54.7% (n = 112,223) of patients were female, and the average age was 50 (IQR [38, 71]). The AUROC ranged from 0.906 to 0.960, and the AUPRC ranged from 0.177 to 0.252 across the four sites. Ultimately, the reproducibility of the Sepsis Watch model in a community health system setting confirmed its strong, robust performance and its portability across different geographic and demographic contexts, with little variation. |
| format | Article |
| id | doaj-art-8201502c15d045d7abbb23ab561b87ac |
| institution | OA Journals |
| issn | 2398-6352 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | npj Digital Medicine |
| spelling | Nature Portfolio, npj Digital Medicine (ISSN 2398-6352), 2025-06-01, vol. 8, iss. 1, doi:10.1038/s41746-025-01664-5. Evaluating sepsis watch generalizability through multisite external validation of a sepsis machine learning model. Bruno Valan (Duke Institute for Health Innovation), Anusha Prakash (Duke Institute for Health Innovation), William Ratliff (Duke Institute for Health Innovation), Michael Gao (Duke Institute for Health Innovation), Srikanth Muthya (Cohere Med Inc), Ajit Thomas (Cohere Med Inc), Jennifer L. Eaton (Summa Health Research & Innovation), Matt Gardner (Duke Institute for Health Innovation), Marshall Nichols (Duke Institute for Health Innovation), Mike Revoir (Duke Institute for Health Innovation), Dustin Tart (Duke University Hospital), Cara O’Brien (Duke University Hospital), Manesh Patel (Department of Medicine, Duke University School of Medicine), Suresh Balu (Duke Institute for Health Innovation), Mark Sendak (Duke Institute for Health Innovation). https://doi.org/10.1038/s41746-025-01664-5 |
| title | Evaluating sepsis watch generalizability through multisite external validation of a sepsis machine learning model |
| url | https://doi.org/10.1038/s41746-025-01664-5 |
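The abstract reports two discrimination metrics per site: AUROC (area under the receiver operating characteristic curve) and AUPRC (area under the precision–recall curve). The gap between them (≈0.9 vs ≈0.2) is typical when the positive class — sepsis — is rare, because AUPRC is anchored to prevalence while AUROC is not. A minimal sketch of how such per-site metrics are computed, using scikit-learn on synthetic labels and scores (this is illustrative only and not the study’s code; the function name, prevalence, and score distributions are invented for the example):

```python
# Illustrative sketch (not the authors' pipeline): computing AUROC and AUPRC
# for one site's encounters, with synthetic sepsis labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def site_metrics(n_encounters: int, prevalence: float) -> tuple[float, float]:
    """Return (AUROC, AUPRC) for synthetic labels and scores at one site."""
    # Rare positive class, as in sepsis prediction.
    y_true = rng.random(n_encounters) < prevalence
    # Positives tend to score high, negatives low, mimicking a useful model.
    scores = np.where(y_true,
                      rng.beta(5, 2, n_encounters),
                      rng.beta(2, 5, n_encounters))
    return roc_auc_score(y_true, scores), average_precision_score(y_true, scores)

auroc, auprc = site_metrics(n_encounters=50_000, prevalence=0.02)
print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}")
```

With a 2% prevalence, AUPRC comes out far below AUROC even for well-separated score distributions, which mirrors the pattern in the study’s reported ranges.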