
Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models

2019-10-16 · approximate inference · AABI Symposium 2019

Rishit Sheth, Roni Khardon


Abstract

We propose that approximate Bayesian algorithms should optimize a new criterion, derived directly from the loss, to compute their approximate posterior, which we refer to as the pseudo-posterior. Unlike standard variational inference, which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees for predictions made with the pseudo-posterior. Our criterion can be used to derive new sparse Gaussian process algorithms with error guarantees applicable to a range of likelihoods.
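The abstract contrasts two objectives for fitting an approximate posterior: the standard variational lower bound, whose data term is the expected log-likelihood under q, and a loss-derived criterion that scores the predictive distribution itself. The toy sketch below (not the paper's sparse-GP algorithm; model, grid search, and all numerical settings are illustrative assumptions) shows the two objectives for a single Gaussian latent variable with a Gaussian likelihood, where both expectations are available in closed form. The direct-loss data term here is the log loss of the q-predictive, a common instantiation of this idea; the paper's precise criterion may differ.

```python
import numpy as np

# Toy model (illustrative, not the paper's setup): latent mean f with
# standard normal prior, Gaussian likelihood N(y; f, noise_var), and a
# Gaussian pseudo-posterior q = N(m, s2).

def log_norm(y, mu, var):
    # Log density of N(y; mu, var), elementwise.
    return -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def kl_to_std_normal(m, s2):
    # KL( N(m, s2) || N(0, 1) ), the regularizer shared by both objectives.
    return 0.5 * (s2 + m ** 2 - 1.0 - np.log(s2))

def neg_elbo(y, m, s2, noise_var):
    # Standard VI data term: E_q[log p(y|f)], closed form for this model.
    exp_loglik = log_norm(y, m, noise_var) - s2 / (2 * noise_var)
    return -np.sum(exp_loglik) + kl_to_std_normal(m, s2)

def dlm_objective(y, m, s2, noise_var):
    # Direct-loss data term: log loss of the q-predictive,
    # -log E_q[p(y|f)] = -log N(y; m, noise_var + s2).
    pred_loglik = log_norm(y, m, noise_var + s2)
    return -np.sum(pred_loglik) + kl_to_std_normal(m, s2)

y = np.array([0.5, 1.0, 1.5])
grid_m = np.linspace(-2, 3, 201)
grid_s2 = np.linspace(0.01, 2.0, 200)

def minimize(obj, noise_var=0.25):
    # Brute-force grid search; enough for a 2-parameter illustration.
    vals = [(obj(y, m, s2, noise_var), m, s2)
            for m in grid_m for s2 in grid_s2]
    return min(vals)[1:]

print("VI optimum  (m, s2):", minimize(neg_elbo))
print("DLM optimum (m, s2):", minimize(dlm_objective))
```

Because the model is conjugate, the VI optimum recovers the exact posterior, while the direct-loss objective generally selects a different (m, s2): the two criteria coincide only when the Jensen gap between E_q[log p] and log E_q[p] vanishes.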
