SOTAVerified

Prescribed Generative Adversarial Networks

2019-10-09 · Code Available

Adji B. Dieng, Francisco J. R. Ruiz, David M. Blei, Michalis K. Titsias


Abstract

Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways: they often learn distributions with low support, a phenomenon known as mode collapse, and they do not guarantee the existence of a probability density, which makes evaluating generalization via predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and find that they mitigate mode collapse and generate samples with high perceptual quality. We further find that PresGANs narrow the gap in predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).
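As a rough illustration of the generative process the abstract describes (a density network whose output is perturbed by Gaussian noise), here is a minimal Python sketch. The toy generator, its fixed "weights", and the constant noise scale `sigma` are hypothetical stand-ins for illustration only; in the actual method the generator is a deep network and the noise scale is learned alongside it.

```python
import math
import random

random.seed(0)

def toy_generator(z):
    """Hypothetical density network: maps a latent z to a mean mu(z).
    Stands in for the deep generator network in PresGAN."""
    h = [math.tanh(0.5 * zi + 0.1) for zi in z]  # one toy hidden layer
    s = sum(h) / len(h)
    return [s, -s]  # 2-D output mean

def presgan_sample(latent_dim=4, sigma=0.1):
    """PresGAN-style draw: x = mu(z) + sigma * eps, with eps ~ N(0, I).
    The added Gaussian noise is what gives the model a well-defined
    density, enabling tractable approximations of the log-likelihood."""
    z = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
    mu = toy_generator(z)
    return [m + sigma * random.gauss(0.0, 1.0) for m in mu]

x = presgan_sample()
print(len(x))  # prints 2: a two-dimensional toy sample
```

Setting `sigma = 0` recovers an ordinary implicit GAN sample with no density; the entropy regularizer and its unbiased gradient estimates from the paper are not shown here.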

Benchmark Results

Dataset        | Model   | Metric | Claimed | Verified | Status
CelebA 128x128 | PresGAN | FID    | 29.12   |          | Unverified
CIFAR-10       | PresGAN | FID    | 52.2    |          | Unverified
MNIST          | PresGAN | FID    | 38.53   |          | Unverified
Stacked MNIST  | PresGAN | FID    | 23.97   |          | Unverified
