On Contrastive Learning for Likelihood-free Inference
Conor Durkan, Iain Murray, George Papamakarios
Code
- github.com/conormdurkan/lfi (official, PyTorch)
- github.com/stephengreen/lfi-gw (PyTorch)
- github.com/AI-HPC-Research-Team/GW (PyTorch)
- github.com/bkmi/cnre (PyTorch)
- github.com/mackelab/nflows (PyTorch)
Abstract
Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem trains a classifier to distinguish pairs of parameter-observation samples generated by the simulator from pairs drawn from some reference distribution; the classifier thereby implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a recent variant allows flexible neural density estimators to be used for this task. In this work, we show that both approaches can be unified under a general contrastive learning scheme, and we clarify how they should be run and compared.
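To make the shared structure concrete, below is a minimal, self-contained PyTorch sketch of one such contrastive scheme. It is not the paper's exact training procedure: `RatioNet`, the toy Gaussian simulator, and all hyperparameters are illustrative assumptions. Each observation in a batch is scored against every parameter in the batch, and the network is trained to identify which parameter actually generated it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RatioNet(nn.Module):
    """Scores a (parameter, observation) pair. At the optimum of the
    contrastive loss below, the score recovers log p(x|theta)/p(x)
    up to an additive function of x, i.e. a ratio proportional to
    the likelihood."""
    def __init__(self, theta_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(theta_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, theta, x):
        return self.net(torch.cat([theta, x], dim=-1)).squeeze(-1)

def contrastive_loss(score_fn, theta, x):
    """For each simulated pair (theta_i, x_i), score x_i against every
    theta_j in the batch and classify which parameter generated it."""
    batch = theta.shape[0]
    # All (theta_j, x_i) combinations: rows index observations, columns parameters.
    theta_grid = theta.unsqueeze(0).expand(batch, -1, -1).reshape(batch * batch, -1)
    x_grid = x.unsqueeze(1).expand(-1, batch, -1).reshape(batch * batch, -1)
    logits = score_fn(theta_grid, x_grid).reshape(batch, batch)
    targets = torch.arange(batch)  # the matching theta_i sits on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with a hypothetical Gaussian simulator x ~ N(theta, 1).
if __name__ == "__main__":
    net = RatioNet(theta_dim=1, x_dim=1)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(100):
        theta = torch.randn(128, 1)      # draws from a standard normal prior
        x = theta + torch.randn(128, 1)  # simulator output
        loss = contrastive_loss(net, theta, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The same loss covers both families described in the abstract: plugging in a classifier logit as the score corresponds to the ratio-estimation approach, while using log q(theta|x) - log p(theta) from a conditional density estimator yields a posterior-estimation variant.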