SOTAVerified

Towards Uncertainties in Deep Learning that Are Accurate and Calibrated

2021-09-29

Volodymyr Kuleshov, Shachi Deshpande

Abstract

Predictive uncertainties can be characterized by two properties---calibration and sharpness. This paper introduces algorithms that ensure the calibration of any model while maintaining sharpness. They apply in both classification and regression and guarantee the strong property of distribution calibration, while being simpler and more broadly applicable than previous methods (especially in the context of neural networks, which are often miscalibrated). Importantly, these algorithms realize a long-standing statistical principle: forecasts should maximize sharpness subject to being fully calibrated. Using our algorithms, machine learning models can, under some assumptions, be calibrated without sacrificing accuracy: in a sense, calibration can be a free lunch. Empirically, we find that our methods improve predictive uncertainties on several tasks with minimal computational and implementation overhead.
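To make the calibration property concrete, here is a minimal sketch of post-hoc quantile recalibration for regression: a held-out calibration set is used to learn a map from a model's predicted CDF values to their empirical frequencies, so that recalibrated probabilities match observed coverage. This is a generic illustration of the recalibration idea, not the specific algorithm of this paper; all function and variable names are illustrative.

```python
import numpy as np

def fit_recalibrator(pred_cdf_vals):
    """Fit a recalibration map from the values F_i(y_i) -- each model-
    predicted CDF evaluated at its observed outcome -- on a held-out
    calibration set. If the model were calibrated, these values would
    be uniform on [0, 1]; the map corrects any deviation."""
    sorted_vals = np.sort(np.asarray(pred_cdf_vals))
    n = len(sorted_vals)

    def recalibrate(p):
        # Map a predicted probability p to the empirical frequency
        # P(F(Y) <= p) estimated on the calibration set.
        return np.searchsorted(sorted_vals, p, side="right") / n

    return recalibrate

# Example: a model whose predicted CDF values are not uniform
# (here, concentrated in [0.2, 0.8]), i.e. a miscalibrated model.
rng = np.random.default_rng(0)
cal = rng.uniform(0.2, 0.8, size=1000)
R = fit_recalibrator(cal)
# After recalibration, R(cal) is approximately uniform on [0, 1],
# which is exactly the (quantile) calibration condition.
```

Sharpness is then preserved because the map only relabels the model's probabilities; it does not widen the predictive distributions themselves.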
