
Depth Uncertainty in Neural Networks

2020-06-15 · NeurIPS 2020 · Code Available

Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato



Abstract

Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
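The abstract's core idea, marginalising over network depth with weight-shared subnetworks and a per-depth output head, can be illustrated with a toy NumPy sketch. This is a hypothetical illustration under assumed shapes and a uniform depth posterior `q_depth`, not the authors' implementation; in the paper the depth posterior is learned variationally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed sizes, not from the paper): max depth D, hidden width H.
D, H, X_DIM, Y_DIM = 4, 16, 3, 2
W_in = rng.normal(scale=0.3, size=(X_DIM, H))
W_hid = [rng.normal(scale=0.3, size=(H, H)) for _ in range(D)]   # shared trunk blocks
W_out = [rng.normal(scale=0.3, size=(H, Y_DIM)) for _ in range(D)]  # one head per depth
q_depth = np.full(D, 1.0 / D)  # depth posterior q(d); uniform here for simplicity

def predict(x):
    """A single forward pass yields a prediction at every depth;
    the subnetworks of depth 1..D share the trunk weights."""
    h = np.tanh(x @ W_in)
    per_depth = []
    for d in range(D):
        h = np.tanh(h @ W_hid[d])       # deepen the shared trunk by one block
        per_depth.append(h @ W_out[d])  # prediction of the depth-(d+1) subnetwork
    per_depth = np.stack(per_depth)     # shape (D, Y_DIM)
    # Marginalise the per-depth predictions under q(d): the mean is the
    # combined prediction, the spread across depths reflects model uncertainty.
    mean = np.einsum("d,dy->y", q_depth, per_depth)
    var = np.einsum("d,dy->y", q_depth, per_depth**2) - mean**2
    return mean, var

mean, var = predict(rng.normal(size=X_DIM))
print(mean.shape, var.shape)
```

Because every depth's prediction is produced along the way during one pass through the trunk, both the marginal prediction and the disagreement between depths come at roughly the cost of a single forward pass.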
