Variational Inference with Tail-adaptive f-Divergence
Dilin Wang, Hao Liu, Qiang Liu
Official code: github.com/dilinwang820/adaptive-f-divergence (TensorFlow)
Abstract
Variational inference with α-divergences has been widely used in modern probabilistic machine learning. Compared to the Kullback-Leibler (KL) divergence, a major advantage of using α-divergences (with positive α values) is their mass-covering property. However, estimating and optimizing α-divergences requires importance sampling, which can have extremely large or even infinite variance due to the heavy tails of the importance weights. In this paper, we propose a new class of tail-adaptive f-divergences that adaptively change the convex function f with the tail of the importance weights, in a way that theoretically guarantees finite moments while simultaneously achieving the mass-covering property. We test our method on Bayesian neural networks, as well as on deep reinforcement learning, where it is applied to improve a recent soft actor-critic (SAC) algorithm. Our results show that our approach yields significant advantages over existing methods based on classical KL and α-divergences.
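As a rough illustration of the problem and the flavor of the fix described above (a schematic sketch, not the authors' exact estimator), the snippet below contrasts standard α-divergence importance weights w^α, which can be dominated by a few heavy-tailed samples, with a rank-based reweighting in which every sample's weight is bounded by construction. The rank-based rule here is a hypothetical stand-in chosen only to show how tying weights to the tail (ordering) of w, rather than to its raw magnitude, keeps the moments finite.

```python
import numpy as np

def alpha_divergence_weights(log_p, log_q, alpha=0.5):
    """Standard alpha-divergence weights w^alpha (self-normalized).

    When w = p/q is heavy-tailed, a single sample can dominate,
    giving the gradient estimate extremely large or infinite variance.
    """
    log_w = log_p - log_q                       # log importance weights
    gamma = np.exp(alpha * log_w)               # w^alpha, potentially huge
    return gamma / gamma.sum()

def rank_based_weights(log_p, log_q):
    """Illustrative tail-adaptive reweighting (assumption, not the paper's
    exact rule): weight each sample by the rank of its importance weight,
    so all weights are bounded no matter how heavy the tail of w is.
    """
    log_w = log_p - log_q
    ranks = np.argsort(np.argsort(log_w)) + 1   # 1 = smallest, n = largest
    gamma = ranks.astype(float)                 # bounded by n by construction
    return gamma / gamma.sum()

# Toy usage with a deliberately heavy-tailed batch of log-weights.
rng = np.random.default_rng(0)
log_p = rng.standard_cauchy(1000)    # toy heavy-tailed target log-densities
log_q = rng.standard_normal(1000)
print(alpha_divergence_weights(log_p, log_q).max())  # often close to 1.0
print(rank_based_weights(log_p, log_q).max())        # at most 2 / (n + 1)
```

In both cases the normalized weights would multiply per-sample gradient terms in the variational update; the rank-based variant trades the exact α-divergence objective for bounded, finite-variance weights, which is the trade-off the tail-adaptive f-divergence is designed to make principled.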