
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization

2022-05-26 · Code Available

Yifei Wang, Peng Chen, Mert Pilanci, Wuchen Li


Abstract

Computing the Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. Approximating the Wasserstein gradient from finite samples requires solving a variational problem. We study this variational problem over the family of two-layer networks with squared-ReLU activations, for which we derive a semi-definite programming (SDP) relaxation. The SDP can be viewed as approximating the Wasserstein gradient over a broader function family that includes two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction within this class of functions. Numerical experiments, including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling, demonstrate the effectiveness of the proposed method.
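To make the variational problem concrete, here is a minimal NumPy sketch of evaluating a sample-based variational objective with a two-layer squared-ReLU network. It assumes the standard KL-case formulation, where minimizing E_p[ ½‖∇f‖² + Δf + ∇f·∇log π ] over f drives ∇f toward the Wasserstein gradient direction ∇log(p/π); the paper's exact objective and its SDP relaxation are not reproduced here, and all names and the toy Gaussian target π are illustrative assumptions.

```python
import numpy as np

def two_layer_sq_relu(x, W, alpha):
    """f(x) = sum_j alpha_j * max(w_j . x, 0)^2; return grad f and Laplacian of f."""
    pre = x @ W.T                      # (n, m) pre-activations w_j . x
    act = np.maximum(pre, 0.0)         # ReLU part of the squared-ReLU activation
    grad = 2.0 * (act * alpha) @ W     # (n, d) gradient of f at each sample
    lap = 2.0 * ((pre > 0) * alpha) @ (W ** 2).sum(axis=1)  # (n,) Laplacian
    return grad, lap

def variational_loss(x, W, alpha, grad_log_pi):
    """Empirical E_p[ 0.5*||grad f||^2 + lap f + grad f . grad log pi ]."""
    grad, lap = two_layer_sq_relu(x, W, alpha)
    return float(np.mean(0.5 * (grad ** 2).sum(axis=1) + lap
                         + (grad * grad_log_pi(x)).sum(axis=1)))

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))          # finite samples standing in for x_i ~ p
W = rng.normal(size=(8, 2))            # hidden-layer weights w_j
alpha = 0.1 * rng.normal(size=8)       # output weights alpha_j
loss = variational_loss(x, W, alpha, grad_log_pi=lambda x: -x)  # pi = N(0, I)
print(loss)
```

In the paper's approach this nonconvex minimization over (W, alpha) is replaced by a convex SDP relaxation; the sketch above only illustrates the finite-sample objective being relaxed.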
