Approximation Based Variance Reduction for Reparameterization Gradients
2020-07-29 · NeurIPS 2020 · Code Available
Tomas Geffner, Justin Domke
- github.com/tomsons22/ABVRR (PyTorch) ★ 1
Abstract
Flexible variational distributions improve variational inference but are harder to optimize. In this work we present a control variate that is applicable to any reparameterizable distribution with known mean and covariance matrix, e.g. Gaussians with any covariance structure. The control variate is based on a quadratic approximation of the model, and its parameters are set using a double-descent scheme by minimizing the gradient estimator's variance. We show empirically that this control variate yields large reductions in gradient variance and faster optimization convergence for inference with non-factorized variational distributions.
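The core idea above can be sketched in a few lines: because the variational distribution's mean and covariance are known, the pathwise gradient of a quadratic approximation of the model has a closed-form expectation, so subtracting the pathwise gradient minus that expectation gives a zero-mean control variate. The sketch below uses a toy target and a diagonal Gaussian; the function `f`, the expansion point `z0`, and the fixed control-variate weight of 1 are illustrative assumptions (the paper instead tunes the control variate by minimizing the estimator's variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target (not from the paper): f(z) = sum_i (z_i^2 + 0.1 z_i^4)
def f_grad(z):
    return 2.0 * z + 0.4 * z ** 3

# Variational q = N(mu, diag(sigma^2)); reparameterization z = mu + sigma * eps
mu = np.array([0.5, -0.3])
sigma = np.array([0.8, 1.2])

# Quadratic approximation of f around z0 = mu (a simple illustrative choice):
# fhat(z) = f(z0) + g0 . (z - z0) + 1/2 (z - z0)^T H (z - z0)
z0 = mu.copy()
g0 = f_grad(z0)
H = np.diag(2.0 + 1.2 * z0 ** 2)  # exact (diagonal) Hessian of this toy f

def grad_mu_plain(eps):
    """Pathwise (reparameterization) gradient of E_q[f] w.r.t. mu, one sample."""
    return f_grad(mu + sigma * eps)

def control_variate(eps):
    """Pathwise mu-gradient of fhat minus its exact expectation under q.
    Since q's mean and covariance are known, E[grad fhat] is closed form,
    so this term has mean zero: a valid control variate."""
    z = mu + sigma * eps
    grad_fhat = g0 + H @ (z - z0)
    expected = g0 + H @ (mu - z0)  # equals g0 here because z0 = mu
    return grad_fhat - expected

# Compare per-sample variance with and without the control variate.
eps = rng.standard_normal((20000, 2))
plain = np.array([grad_mu_plain(e) for e in eps])
controlled = plain - np.array([control_variate(e) for e in eps])
var_plain = plain.var(axis=0)
var_cv = controlled.var(axis=0)
```

Subtracting the control variate cancels the locally linear part of the gradient, leaving only the higher-order remainder, so `var_cv` is well below `var_plain` in both coordinates while the estimator's mean is unchanged.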