Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support.

Bibliographic Details
Main Authors: Malte Blattmann, Adrian Lindenmeyer, Stefan Franke, Thomas Neumuth, Daniel Schneider
Format: Article
Language:English
Published: Public Library of Science (PLoS) 2025-07-01
Series:PLOS Digital Health
Online Access:https://doi.org/10.1371/journal.pdig.0000801
collection DOAJ
description Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can give overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by introducing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer-specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that implicit functional-prior methods (specifically, neural network ensembles and factorized-weight-prior variational Bayesian neural networks) exhibit reduced fidelity when approximating the posterior distribution and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.
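The "implicit functional prior" baseline the abstract refers to can be illustrated with a deep ensemble: several networks trained from different random initializations, whose disagreement on a query is read as epistemic uncertainty (here via the mutual information between the prediction and the choice of ensemble member). This is a minimal NumPy sketch on invented toy data, not the paper's implementation or dataset; the architecture, hyperparameters, and query points are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: one feature, label follows its sign (invented).
X = rng.normal(0.0, 1.0, size=(200, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, steps=500, lr=0.1, seed=0):
    """Train a tiny one-hidden-layer MLP with full-batch gradient descent on BCE loss."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2).ravel()
        g = (p - y)[:, None] / len(y)          # dLoss/dlogit per sample
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
        gh = (g @ W2.T) * (1 - h ** 2)         # backprop through tanh layer
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def entropy(p):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Ensemble: same data, different random initializations (the implicit prior).
ensemble = [train_mlp(X, y, seed=s) for s in range(10)]
X_query = np.array([[0.0], [8.0]])             # near the data vs. far out-of-distribution
probs = np.stack([predict(m, X_query) for m in ensemble])   # shape: (members, queries)

p_mean = probs.mean(axis=0)
# Epistemic uncertainty as mutual information:
# I = H(mean prediction) - mean member entropy, i.e. pure ensemble disagreement.
epistemic = entropy(p_mean) - entropy(probs).mean(axis=0)
print("mean prob:", p_mean, "epistemic MI:", epistemic)
```

On such toy setups the disagreement term often grows away from the training data, but, as the abstract argues, nothing in this construction guarantees distance-aware behavior; explicitly distance-aware priors such as SNGP build that property in architecturally.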
id doaj-art-03da22f268ed435ba06f6c99c034aa63
institution Kabale University
issn 2767-3170