Wasserstein Auto-Encoders
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf
Code
- github.com/tolstikhin/wae (official, in paper; TensorFlow) ★ 0
- github.com/boschresearch/unscented-autoencoder (PyTorch) ★ 10
- github.com/vitskvara/GenerativeModels.jl (no framework listed) ★ 1
- github.com/schelotto/Wasserstein-AutoEncoders (PyTorch) ★ 0
- github.com/sedelmeyer/wasserstein-auto-encoder (PyTorch) ★ 0
- github.com/vitskvara/GenModels.jl (no framework listed) ★ 0
- github.com/eifuentes/swae-pytorch (PyTorch) ★ 0
- github.com/pravn/wasserstein_autoencoders (PyTorch) ★ 0
- github.com/allnightlight/ConditionalWassersteinAutoencoderPoweredBySinkhornDistance (no framework listed) ★ 0
- github.com/mitscha/dplc (PyTorch) ★ 0
Abstract
We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
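The regularizer described above penalizes the discrepancy between the aggregated encoded distribution and the prior. One of the paper's variants (WAE-MMD) instantiates this penalty as a kernel-based maximum mean discrepancy between a batch of encoded codes and a batch of prior samples. The sketch below is a minimal, stdlib-only illustration of such an MMD estimator; the function names, the RBF kernel choice, and the bandwidth value are illustrative assumptions, not taken from the paper's code.

```python
import math
import random

def rbf_kernel(x, y, sigma2=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma2)).
    # sigma2 is an illustrative bandwidth choice.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma2))

def mmd_penalty(z_encoded, z_prior, sigma2=1.0):
    """Unbiased estimate of squared MMD between encoded codes and prior samples.

    z_encoded, z_prior: lists of equal-dimension vectors (lists of floats).
    Returns a scalar that is near zero when the two sample sets come from
    the same distribution, and grows as the distributions separate.
    """
    n, m = len(z_encoded), len(z_prior)
    # Within-sample kernel averages (diagonal terms excluded for unbiasedness).
    k_xx = sum(rbf_kernel(z_encoded[i], z_encoded[j], sigma2)
               for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    k_yy = sum(rbf_kernel(z_prior[i], z_prior[j], sigma2)
               for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    # Cross-sample kernel average.
    k_xy = sum(rbf_kernel(z_encoded[i], z_prior[j], sigma2)
               for i in range(n) for j in range(m)) / (n * m)
    return k_xx + k_yy - 2.0 * k_xy

if __name__ == "__main__":
    random.seed(0)
    # Prior: 2-D standard Gaussian samples.
    prior = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)]
    # Codes matching the prior vs. codes far from it.
    codes_near = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)]
    codes_far = [[random.gauss(5, 1), random.gauss(5, 1)] for _ in range(50)]
    print(mmd_penalty(codes_near, prior))  # small: distributions match
    print(mmd_penalty(codes_far, prior))   # large: distributions differ
```

In a full WAE-MMD training loop, a weighted `mmd_penalty` term would be added to the per-batch reconstruction cost, so the encoder is pushed to make the aggregated code distribution match the prior while the decoder learns to reconstruct.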