Autoencoding beyond pixels using a learned similarity metric
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther
Code
- github.com/LynnHo/AttGAN-Tensorflow (TensorFlow, ★ 616)
- github.com/manicman1999/Sword-GAN32 (★ 14)
- github.com/pravn/vaegan (PyTorch, ★ 10)
- github.com/oadonca/ANVAE (TensorFlow, ★ 6)
- github.com/leoHeidel/vae-gan-tf2 (TensorFlow, ★ 6)
- github.com/AlexanderBogatko/TensorFlow_Keras_VAEGAN (TensorFlow, ★ 5)
- github.com/gm3g11/VAE_GAN_pytorch (PyTorch, ★ 0)
- github.com/adrienchaton/BERGAN (PyTorch, ★ 0)
- github.com/Ram81/AC-VAEGAN-PyTorch (PyTorch, ★ 0)
- github.com/MariaPdg/fmri-reconstruction (PyTorch, ★ 0)
Abstract
We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network, we can use the learned feature representations in the GAN discriminator as a basis for the VAE reconstruction objective. We thereby replace element-wise errors with feature-wise errors, which better capture the data distribution while offering invariance to transformations such as translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
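The core idea of the abstract — measuring VAE reconstruction error in the feature space of the GAN discriminator rather than pixel space — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: `disc_features` stands in for an intermediate discriminator layer (here an assumed single random linear map with ReLU), and a Gaussian observation model in feature space reduces the reconstruction term to a squared error between features.

```python
import numpy as np

rng = np.random.default_rng(0)

def disc_features(x, W):
    # Hypothetical l-th discriminator layer: linear map + ReLU.
    # In the paper this would be a learned layer of the GAN discriminator.
    return np.maximum(W @ x, 0.0)

def feature_wise_loss(x, x_recon, W):
    # Reconstruction error measured in discriminator feature space
    # (replaces the element-wise pixel error of a plain VAE).
    d = disc_features(x, W) - disc_features(x_recon, W)
    return float(np.mean(d ** 2))

# Toy data: a flattened "image" and a noisy reconstruction; W is a
# random stand-in for learned discriminator weights (assumption).
x = rng.normal(size=64)
x_recon = x + 0.1 * rng.normal(size=64)
W = rng.normal(size=(32, 64)) / np.sqrt(64)

pixel_loss = float(np.mean((x - x_recon) ** 2))   # element-wise error
feat_loss = feature_wise_loss(x, x_recon, W)      # feature-wise error
```

In training, the gradient of `feat_loss` flows back through the (fixed or jointly trained) discriminator layers into the VAE decoder, so reconstructions are pushed to match in learned feature statistics rather than exact pixel values.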