
Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization

2020-07-07

Rogan Morrow, Wei-Chen Chiu


Abstract

There exist many forms of deep latent variable models, such as the variational autoencoder and the adversarial autoencoder. Regardless of the specific class of model, there is an implicit consensus that the latent distribution should be regularized towards the prior, even when the prior distribution is itself learned. Upon investigating the effect of latent regularization on image generation, our results indicate that when a sufficiently expressive prior is learned, latent regularization is not necessary and may in fact be harmful insofar as image quality is concerned. We additionally investigate the benefit of learned priors on two common problems in computer vision: latent variable disentanglement, and diversity in image-to-image translation.
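To make the notion of "latent regularization" concrete, the sketch below writes a VAE-style objective as reconstruction error plus a weighted KL term between the approximate posterior and a (possibly learned) Gaussian prior. This is a minimal illustration under assumed diagonal-Gaussian distributions, not the authors' implementation: setting the weight `beta` to zero corresponds to removing latent regularization entirely, while learning `mu_p`/`logvar_p` corresponds to a learned prior.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal
    # Gaussians, summed over latent dimensions.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def latent_objective(recon_err, mu_q, logvar_q, mu_p, logvar_p, beta=1.0):
    # Total objective: reconstruction error plus beta-weighted latent
    # regularization. beta=1 recovers the usual ELBO-style loss;
    # beta=0 drops the regularizer, as studied in the abstract.
    return recon_err + beta * gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)

# Toy usage: a 4-dimensional latent with a standard-normal prior.
mu_q, logvar_q = np.full(4, 0.5), np.full(4, -0.2)
mu_p, logvar_p = np.zeros(4), np.zeros(4)
with_reg = latent_objective(2.0, mu_q, logvar_q, mu_p, logvar_p, beta=1.0)
without_reg = latent_objective(2.0, mu_q, logvar_q, mu_p, logvar_p, beta=0.0)
```

With `beta=0.0` the objective reduces to the reconstruction term alone, which is the regime the paper argues can improve image quality when the prior is expressive enough.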
