A prediction rigidity formalism for low-cost uncertainties in trained neural networks

Bibliographic Details
Main Authors: Filippo Bigi, Sanggyu Chong, Michele Ceriotti, Federico Grasselli
Format: Article
Language: English
Published: IOP Publishing 2024-01-01
Series: Machine Learning: Science and Technology
Online Access: https://doi.org/10.1088/2632-2153/ad805f
Description
Summary: Quantifying the uncertainty of regression models is essential to ensure their reliability, particularly since their application often extends beyond their training domain. Based on the solution of a constrained optimization problem, this work proposes ‘prediction rigidities’ as a formalism to obtain uncertainties of arbitrary pre-trained regressors. A clear connection between the suggested framework and Bayesian inference is established, and a last-layer approximation is developed and rigorously justified to enable the application of the method to neural networks. This extension affords cheap uncertainties without any modification to the neural network itself or its training procedure. The effectiveness of this approach is shown for a wide range of regression tasks, ranging from simple toy models to applications in chemistry and meteorology.
ISSN: 2632-2153
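
The last-layer approximation described in the summary lends itself to a compact illustration. The Python/NumPy sketch below treats the trained network's last-layer features as a linear model and derives a predictive variance from their regularized covariance, in the spirit of the abstract; it is a minimal reading of the idea under stated assumptions, not the authors' implementation, and all names (last_layer_variance, F_train, f_star, lam, alpha) are illustrative rather than the paper's notation.

    import numpy as np

    def last_layer_variance(F_train, f_star, lam=1e-6, alpha=1.0):
        """Predictive variance for one test point from last-layer features.

        F_train : (n_train, d) last-layer features of the training set
        f_star  : (d,) last-layer features of the test input
        lam     : regularization strength (a stand-in for a prior precision)
        alpha   : overall scale, to be calibrated on held-out data
        """
        d = F_train.shape[1]
        # Regularized covariance of the training features; its inverse plays
        # the role of an approximate posterior covariance over the last-layer
        # weights, so no retraining or modification of the network is needed.
        cov = F_train.T @ F_train + lam * np.eye(d)
        # sigma^2 = alpha * f*^T cov^{-1} f*; solve rather than invert.
        return alpha * f_star @ np.linalg.solve(cov, f_star)

    # Usage with random stand-in features (a real application would extract
    # these from the penultimate layer of the trained network):
    rng = np.random.default_rng(0)
    F = rng.normal(size=(100, 16))   # training-set features
    f = rng.normal(size=16)          # test-point features
    print(last_layer_variance(F, f))

Note that the covariance matrix is computed once over the training set, so per-prediction cost reduces to a single linear solve in the feature dimension, which is what makes uncertainties of this kind cheap for a fixed, already-trained network.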