Neural Posterior Regularization for Likelihood-Free Inference

2021-02-15

Dongjun Kim, Kyungwoo Song, Seungjae Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo


Abstract

Simulation is useful when the phenomenon of interest is either expensive to reproduce or irreproducible under the same conditions. Recently, Bayesian inference over the distribution of the simulation input parameters has been performed sequentially to minimize the simulation budget required to validate a simulation against the real world. However, this Bayesian inference remains challenging when the ground-truth posterior is multi-modal and the simulation output is high-dimensional. This paper introduces a regularization technique, namely Neural Posterior Regularization (NPR), which encourages the model to explore the input parameter space effectively. We then provide the closed-form solution of the regularized optimization, which enables analysis of the regularization's effect. We empirically validate that NPR attains statistically significant gains on benchmark performance across diverse simulation tasks.
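The abstract does not give the paper's actual objective, but the general pattern of a closed-form, exploration-encouraging regularizer can be illustrated with a toy example. The sketch below (an assumption for illustration, not the paper's NPR) adds an entropy bonus to the negative log-likelihood when fitting a 1-D Gaussian; the regularized optimum has the closed-form variance s² / (1 − λ), i.e. the bonus widens the fitted distribution, which is one simple way a regularizer can push a posterior approximation to cover more of the parameter space.

```python
import numpy as np

def regularized_gaussian_fit(x, lam=0.2):
    """Fit a 1-D Gaussian by minimizing NLL minus lam * entropy.

    Toy illustration only (hypothetical, not the paper's NPR objective):
    per sample, J(sigma^2) = (1/2) log sigma^2 + s^2 / (2 sigma^2)
                             - lam * (1/2) log sigma^2  (+ constants),
    whose minimizer is sigma^2 = s^2 / (1 - lam). The entropy bonus
    inflates the fitted variance, encouraging a wider, more
    exploratory distribution over the parameter space.
    """
    assert 0.0 <= lam < 1.0, "lam must lie in [0, 1) for a finite optimum"
    mu = x.mean()
    s2 = ((x - mu) ** 2).mean()      # maximum-likelihood variance
    sigma2 = s2 / (1.0 - lam)        # closed-form regularized variance
    return mu, sigma2

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=1000)

mu0, v0 = regularized_gaussian_fit(x, lam=0.0)   # plain MLE
mu1, v1 = regularized_gaussian_fit(x, lam=0.3)   # entropy-regularized
# the bonus widens the fit: v1 == v0 / (1 - 0.3)
```

The closed-form solution here mirrors the abstract's point: because the regularized optimum is available analytically, the effect of the regularization strength λ on the fitted distribution can be read off directly rather than studied only empirically.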
