arXiv Open Access 2024

A prediction rigidity formalism for low-cost uncertainties in trained neural networks

Filippo Bigi, Sanggyu Chong, Michele Ceriotti, Federico Grasselli

Abstract

Regression methods are fundamental for scientific and technological applications. However, fitted models can be highly unreliable outside of their training domain, and hence the quantification of their uncertainty is crucial in many of their applications. Based on the solution of a constrained optimization problem, we propose "prediction rigidities" as a method to obtain uncertainties of arbitrary pre-trained regressors. We establish a strong connection between our framework and Bayesian inference, and we develop a last-layer approximation that allows the new method to be applied to neural networks. This extension affords cheap uncertainties without any modification to the neural network itself or its training procedure. We show the effectiveness of our method on a wide range of regression tasks, ranging from simple toy models to applications in chemistry and meteorology.
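The last-layer approximation described in the abstract can be sketched generically: treating the trained network's final hidden activations as fixed features, an uncertainty proxy for a test point can be computed from the feature covariance of the training set. The sketch below is an illustrative assumption in this spirit (a quadratic form in the inverse regularized feature covariance), not the authors' actual implementation; the function name and regularization are hypothetical.

```python
import numpy as np

def last_layer_uncertainty(features_train, features_test, reg=1e-3):
    """Illustrative last-layer uncertainty proxy (not the paper's code).

    features_train: (n_train, d) last-layer activations of a trained network.
    features_test:  (n_test, d) activations for the points to be assessed.
    Returns, per test point, f(x)^T (F^T F + reg*I)^{-1} f(x): small where
    the training features constrain the prediction tightly, large otherwise.
    """
    F = np.asarray(features_train)
    Ft = np.asarray(features_test)
    # Regularized feature covariance of the training set.
    H = F.T @ F + reg * np.eye(F.shape[1])
    Hinv = np.linalg.inv(H)
    # Batched quadratic form f^T H^{-1} f for each test point.
    return np.einsum('ij,jk,ik->i', Ft, Hinv, Ft)
```

Because only the last-layer features are needed, such an estimate requires no modification to the network or its training: test points whose features lie outside the region spanned by the training features receive larger values.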


Citation

Bigi, F., Chong, S., Ceriotti, M., & Grasselli, F. (2024). A prediction rigidity formalism for low-cost uncertainties in trained neural networks. arXiv. https://arxiv.org/abs/2403.02251

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓