Refined α-Divergence Variational Inference via Rejection Sampling
Rahul Sharma, Abhishek Kumar, Piyush Rai
Abstract
We present an approximate inference method based on a synergistic combination of Rényi α-divergence variational inference (RDVI) and rejection sampling (RS). RDVI minimizes the Rényi α-divergence D_α(p‖q) between the true distribution p(x) and a variational approximation q(x); RS draws samples from a distribution p(x) = p̃(x)/Z_p using a proposal q(x) satisfying M q(x) ≥ p̃(x), ∀x. Our inference method is based on the crucial observation that D_∞(p‖q) equals log M(θ), where M(θ) is the optimal value of the RS constant for a given proposal q_θ(x). This enables us to develop a two-stage hybrid inference algorithm. Stage 1 performs RDVI to learn q_θ by minimizing an estimator of D_α(p‖q), and uses the learned q_θ to find an (approximately) optimal M(θ). Stage 2 performs RS with the constant M(θ) to refine the approximate distribution q_θ and obtain a sample-based approximation. We prove that this two-stage method yields considerably more accurate approximations of the target distribution than RDVI alone, and we demonstrate its efficacy through experiments on synthetic and real datasets.
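As a concrete illustration of the Stage-2 step, the sketch below runs standard rejection sampling with a fixed constant M against a toy unnormalized target. The target density, the Gaussian proposal standing in for a Stage-1 q_θ, and the grid-based estimate of M are all illustrative assumptions for this sketch; the paper's actual method learns q_θ via RDVI and obtains M(θ) from the divergence estimate rather than a grid search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D unnormalized target log-density log p_tilde(x):
# an equal-weight mixture of unit-variance Gaussians at -2 and +2.
def log_p_tilde(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

# Stand-in for a learned Stage-1 proposal q_theta(x): a broad Gaussian.
mu, sigma = 0.0, 3.0
def log_q(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Approximate the optimal RS constant M = sup_x p_tilde(x)/q(x) by a
# grid search (illustrative only; the paper derives M(theta) from the
# divergence between p and q_theta).
grid = np.linspace(-10.0, 10.0, 10001)
log_M = np.max(log_p_tilde(grid) - log_q(grid))

# Stage-2 analogue: classic rejection sampling with constant M.
def rejection_sample(n):
    samples = []
    while len(samples) < n:
        x = rng.normal(mu, sigma)          # propose from q
        u = rng.uniform()
        # Accept with probability p_tilde(x) / (M * q(x)), in log space.
        if np.log(u) <= log_p_tilde(x) - (log_M + log_q(x)):
            samples.append(x)
    return np.array(samples)

draws = rejection_sample(2000)
print("sample mean:", draws.mean())
print("sample std: ", draws.std())
```

Because the accepted draws are exact samples from the normalized target, their moments match the mixture's (mean near 0, standard deviation near √5), whereas the proposal alone would overstate the spread; this is the sense in which the RS stage refines the variational approximation.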