Uncertainty quantification for neural network potential foundation models

Abstract: For neural network potentials (NNPs) to gain widespread use, researchers must be able to trust model outputs. However, the black-box nature of neural networks and their inherent stochasticity are often deterrents, especially for foundation models trained over broad swaths of chemical space. Uncertainty information provided at the time of prediction can help reduce aversion to NNPs. In this work, we detail two uncertainty quantification (UQ) methods. Readout ensembling, by finetuning the readout layers of an ensemble of foundation models, provides information about model uncertainty, while quantile regression, by replacing point predictions with distributional predictions, provides information about uncertainty within the underlying training data. We demonstrate our approach with the MACE-MP-0 model, applying UQ to the foundation model and a series of finetuned models. The uncertainties produced by the readout ensemble and quantile methods are demonstrated to be distinct measures by which the quality of the NNP output can be judged.
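
As a rough illustration of the readout-ensemble idea described in the abstract (a minimal sketch, not the authors' implementation; the predictor interface below is assumed), the spread of energies across independently finetuned readout heads can serve as a model-uncertainty estimate:

# Minimal sketch of readout-ensemble UQ (illustrative only, not MACE-MP-0 code).
# `members` is assumed to be a list of callables, each mapping a structure to a
# predicted energy; in practice these would be copies of a foundation model whose
# readout layers were finetuned independently.
import numpy as np

def ensemble_energy(members, structure):
    """Return the ensemble-mean energy and the standard deviation across members."""
    preds = np.array([m(structure) for m in members])
    return preds.mean(), preds.std(ddof=1)  # spread across members ~ model uncertainty

# Toy usage with stand-in predictors (hypothetical):
members = [lambda s, b=b: -3.10 + b for b in (0.00, 0.02, -0.01)]
mean_e, sigma_e = ensemble_energy(members, structure=None)
print(f"E = {mean_e:.3f} +/- {sigma_e:.3f} eV")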

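The quantile-regression idea can likewise be sketched via the pinball loss, which trains a head to predict chosen quantiles rather than a single point value; the spread between, say, the 0.25 and 0.75 quantile predictions then reflects uncertainty tied to the underlying training data (a hedged sketch with illustrative numbers, not the paper's code):

# Minimal sketch of the pinball (quantile) loss used in quantile regression
# (illustrative; the function name and values are hypothetical, not from the paper).
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile loss for quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Heads trained with tau = 0.25 and tau = 0.75 give lower/upper quantile energies;
# their difference is a data-driven uncertainty interval.
y_true = np.array([-3.12, -3.08, -3.11])
print(pinball_loss(y_true, np.array([-3.15, -3.10, -3.13]), tau=0.75))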

Bibliographic Details
Main Authors: Jenna A. Bilbrey, Jesun S. Firoz, Mal-Soon Lee, Sutanay Choudhury
Author Affiliations: Jenna A. Bilbrey (AI & Data Analytics, Pacific Northwest National Laboratory); Jesun S. Firoz (Advanced Computing, Mathematics, & Data, Pacific Northwest National Laboratory); Mal-Soon Lee (Chemical Physics & Analysis, Pacific Northwest National Laboratory); Sutanay Choudhury (Advanced Computing, Mathematics, & Data, Pacific Northwest National Laboratory)
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: npj Computational Materials
ISSN: 2057-3960
Collection: DOAJ
Online Access: https://doi.org/10.1038/s41524-025-01572-y