Probabilistic Deep Learning with Generalised Variational Inference

AABI Symposium 2022 · 2021-11-22

Giorgos Felekis, Theo Damoulas, Brooks Paige

Abstract

We study probabilistic Deep Learning methods through the lens of Approximate Bayesian Inference. In particular, we examine Bayesian Neural Networks (BNNs), which commonly suffer from ill-posed assumptions such as prior and likelihood misspecification. To address this, we investigate a recently proposed approximate inference framework, Generalised Variational Inference (GVI), and compare it against state-of-the-art methods including standard Variational Inference, Monte Carlo Dropout, Stochastic Gradient Langevin Dynamics, and Deep Ensembles. We also extend the original research on GVI by exploring a broader set of model architectures and mathematical settings on both real and synthetic data. Our experiments demonstrate that the approximate posterior distributions produced by this framework offer attractive properties with respect to uncertainty quantification, robustness to prior specification, and predictive performance, especially in the case of BNNs.
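The GVI framework generalises the standard variational objective by allowing both the loss term and the divergence-to-prior term to be swapped out: instead of the negative ELBO, one minimises E_q[loss(θ, D)] + D(q || π) over a variational family Q. The sketch below illustrates this structure on a toy one-parameter Bayesian linear regression with a Gaussian variational family. It is a minimal illustration of the objective's form, not the paper's implementation; the data, the model, and all function names here are hypothetical, and KL is used as one concrete divergence choice (recovering standard VI) that GVI would let you replace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data for a one-parameter linear model y = w * x + noise.
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

def kl_gaussian(mu_q, sig_q, mu_p, sig_p):
    """Closed-form KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) )."""
    return (np.log(sig_p / sig_q)
            + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2) - 0.5)

def gvi_objective(mu_q, sig_q, divergence=kl_gaussian, n_samples=1000):
    """Monte Carlo GVI-style objective: E_q[loss(w, D)] + D(q || prior).

    With loss = Gaussian negative log-likelihood (up to a constant) and
    D = KL, this recovers the negative ELBO of standard VI; the GVI
    framework allows either piece to be replaced (e.g. robust losses,
    Renyi divergences).
    """
    w = rng.normal(mu_q, sig_q, size=n_samples)       # samples from q
    resid = y[None, :] - w[:, None] * x[None, :]      # per-sample residuals
    expected_loss = 0.5 * np.mean(np.sum(resid**2, axis=1))
    return expected_loss + divergence(mu_q, sig_q, mu_p=0.0, sig_p=1.0)

# A variational q centred near the data-generating weight should score
# better (lower objective) than one centred far from it.
good = gvi_objective(2.0, 0.3)
bad = gvi_objective(-3.0, 0.3)
print(good < bad)
```

In a BNN, `w` would be the full weight vector and the expectation would be estimated with reparameterised gradients, but the two-term structure of the objective is the same.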
