arXiv Open Access 2025

Rethinking Early Stopping: Refine, Then Calibrate

Eugène Berta David Holzmüller Michael I. Jordan Francis Bach

Abstract

Machine learning classifiers often produce probabilistic predictions that are critical for accurate and interpretable decision-making in various domains. The quality of these predictions is generally evaluated with proper losses, such as cross-entropy, which decompose into two components: calibration error assesses general under/overconfidence, while refinement error measures the ability to distinguish different classes. In this paper, we present a novel variational formulation of the calibration-refinement decomposition that sheds new light on post-hoc calibration, and enables rapid estimation of the different terms. Equipped with this new perspective, we provide theoretical and empirical evidence that calibration and refinement errors are not minimized simultaneously during training. Selecting the best epoch based on validation loss thus leads to a compromise point that is suboptimal for both terms. To address this, we propose minimizing refinement error only during training (Refine,...), before minimizing calibration error post hoc, using standard techniques (...then Calibrate). Our method integrates seamlessly with any classifier and consistently improves performance across diverse classification tasks.
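The "then Calibrate" step refers to standard post-hoc calibration on held-out validation data. As an illustration of one such standard technique (not the paper's specific implementation), the sketch below fits a single temperature parameter by minimizing validation cross-entropy over scaled logits; all function names and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(logits, labels, temperature=1.0):
    """Mean cross-entropy (negative log-likelihood) of temperature-scaled logits."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Post-hoc calibration: find the temperature T > 0 minimizing validation NLL."""
    res = minimize_scalar(lambda t: nll(logits, labels, t),
                          bounds=(0.05, 20.0), method="bounded")
    return res.x

# Toy example: a classifier whose logits are inflated (overconfident).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
clean = rng.normal(size=(500, 3))
clean[np.arange(500), labels] += 1.5   # add signal toward the true class
overconfident = 4.0 * clean            # inflated logits -> overconfidence
T = fit_temperature(overconfident, labels)  # recovers T well above 1
```

Temperature scaling only rescales confidence; it cannot change which class ranks highest, which is why it reduces calibration error while leaving refinement (discriminative ability) untouched.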


Authors (4)

Eugène Berta
David Holzmüller
Michael I. Jordan
Francis Bach

Citation Format

Berta, E., Holzmüller, D., Jordan, M.I., Bach, F. (2025). Rethinking Early Stopping: Refine, Then Calibrate. https://arxiv.org/abs/2501.19195

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓