
Meta-Learning for Variational Inference

2019-09-25 · approximate inference · AABI Symposium 2019

Ruqi Zhang, Yingzhen Li, Chris De Sa, Sam Devlin, Cheng Zhang


Abstract

Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and general applicability. Crucial to the performance of VI is the selection of the divergence measure in the optimization objective, as it significantly affects the properties of the approximate posterior. In this paper, we propose a meta-learning algorithm to learn (i) the divergence measure suited to the task of interest, automating the design of the VI method; and (ii) an initialization of the variational parameters, which drastically reduces the number of VI optimization steps. We demonstrate that the learned divergence outperforms hand-designed divergences on Gaussian mixture distribution approximation, Bayesian neural network regression, and partial variational autoencoder based recommender systems.
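To make point (ii) concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm) of meta-learning an initialization of variational parameters. It uses a hypothetical toy setup: each task asks VI to approximate p = N(mu, 1) with q = N(m, 1), where KL(q‖p) = (m − mu)²/2, and a Reptile-style outer loop moves the shared initialization toward each task's adapted solution. The task means, step sizes, and loop counts are all illustrative assumptions.

```python
# Illustrative sketch, not the authors' method: learn a shared initialization
# of the variational parameter m so that a handful of VI steps suffices
# for any new task drawn from the task distribution.

def adapt(m0, mu, lr=0.5, steps=3):
    """Inner loop: a few VI gradient steps from initialization m0 on one task.

    For q = N(m, 1) and p = N(mu, 1), KL(q || p) = (m - mu)^2 / 2,
    so the gradient with respect to m is simply (m - mu).
    """
    m = m0
    for _ in range(steps):
        m -= lr * (m - mu)
    return m

def meta_learn_init(task_means, m0=3.0, meta_lr=0.1, epochs=200):
    """Outer loop (Reptile-style): nudge the init toward adapted solutions."""
    for _ in range(epochs):
        for mu in task_means:
            m0 += meta_lr * (adapt(m0, mu) - m0)
    return m0

# Hypothetical task distribution: three Gaussian targets.
task_means = [-1.0, 0.0, 1.0]
m0 = meta_learn_init(task_means)
# The learned init settles near the centre of the task distribution,
# so adaptation on a new task needs far fewer VI steps than a cold start.
```

The same outer-loop structure would apply with a richer variational family; here the closed-form KL gradient keeps the sketch self-contained. Learning the divergence itself (point (i)) would additionally treat a divergence hyperparameter, e.g. the order of an alpha-divergence, as a meta-learned quantity.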
