
Scalable Bayesian neural networks by layer-wise input augmentation

2020-10-26

Trung Trinh, Samuel Kaski, Markus Heinonen

Code Available


Abstract

We introduce implicit Bayesian neural networks, a simple and scalable approach to uncertainty representation in deep learning. The standard Bayesian approach to deep learning requires the impractical inference of a posterior distribution over millions of parameters. Instead, we propose to induce a distribution that captures the uncertainty over neural networks by augmenting each layer's inputs with latent variables. We present appropriate input distributions and demonstrate state-of-the-art performance in terms of calibration, robustness and uncertainty characterisation on large-scale, multi-million-parameter image classification tasks.
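The core idea of layer-wise input augmentation can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the latent variables are assumed to be standard Gaussian, and the toy network, layer sizes, and weight initialisation are invented for the example. Appending freshly sampled latent variables to a layer's input on every forward pass induces a distribution over the network's outputs, which Monte Carlo sampling then turns into a predictive distribution.

```python
import numpy as np

rng = np.random.default_rng(0)


def augmented_layer(x, W, b, latent_dim, rng):
    """Augment the layer input with latent noise z ~ N(0, I) before the
    affine map; resampling z on each forward pass makes the layer's
    output stochastic."""
    z = rng.standard_normal(latent_dim)
    return np.maximum(W @ np.concatenate([x, z]) + b, 0.0)  # ReLU


# Toy network: one augmented hidden layer, one plain linear output layer.
# All sizes are illustrative assumptions.
in_dim, latent_dim, hidden_dim, out_dim = 4, 2, 8, 1
W1 = rng.standard_normal((hidden_dim, in_dim + latent_dim)) * 0.5
b1 = np.zeros(hidden_dim)
W2 = rng.standard_normal((out_dim, hidden_dim)) * 0.5
b2 = np.zeros(out_dim)

x = rng.standard_normal(in_dim)

# Monte Carlo over the latent inputs yields a predictive distribution:
# the spread of the samples reflects the induced model uncertainty.
samples = np.array([
    (W2 @ augmented_layer(x, W1, b1, latent_dim, rng) + b2)[0]
    for _ in range(100)
])
print("predictive mean:", samples.mean())
print("predictive std: ", samples.std())
```

In the actual method the latent input distributions are learned rather than fixed, but the mechanism is the same: uncertainty enters through the augmented inputs at each layer instead of through a posterior over the weights themselves.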
