BEGAN: Boundary Equilibrium Generative Adversarial Networks
David Berthelot, Thomas Schumm, Luke Metz
Code
- github.com/Heumi/BEGAN-tensorflow (tf)
- github.com/davidismael/BEGAN (tf)
- github.com/artcg/BEGAN (tf)
- github.com/vbnmzxc9513/GAN_BEGAN_hw2 (pytorch)
- github.com/mlvc-lab/BeGan_pytorch (pytorch)
- github.com/carpedm20/BEGAN-tensorflow (tf)
- github.com/timsainb/GAIA (tf)
- github.com/taey16/pix2pixBEGAN.pytorch (pytorch)
- github.com/evan11401/CS_IOC5008_0856043_HW2 (pytorch)
- github.com/consequencesunintended/BEGAN (tf)
Abstract
We propose a new equilibrium-enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder-based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training, and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.
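The core mechanism the abstract alludes to can be made concrete. In BEGAN the discriminator is an auto-encoder, and its per-batch reconstruction losses on real and generated images drive both the adversarial objectives and a proportional-control variable k that keeps the two players in balance. The sketch below follows the update rules from the paper; the function names and the scalar-loss interface are illustrative choices, not part of the original text, and gamma is the diversity/quality trade-off knob the abstract mentions.

```python
import numpy as np

def began_losses(loss_real, loss_fake, k):
    """Adversarial objectives from auto-encoder reconstruction losses.

    loss_real: L(x), reconstruction loss on real images.
    loss_fake: L(G(z)), reconstruction loss on generated images.
    k: current balance coefficient in [0, 1].
    """
    loss_d = loss_real - k * loss_fake  # discriminator objective
    loss_g = loss_fake                  # generator objective
    return loss_d, loss_g

def update_k(k, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """Proportional control: drive E[L(G(z))] toward gamma * E[L(x)].

    gamma < 1 trades diversity for visual quality; lambda_k is the
    (assumed) step size for k, clamped to [0, 1] as in the paper.
    """
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    return float(np.clip(k, 0.0, 1.0))

def convergence_measure(loss_real, loss_fake, gamma=0.5):
    """Approximate convergence measure M = L(x) + |gamma*L(x) - L(G(z))|."""
    return loss_real + abs(gamma * loss_real - loss_fake)
```

Because M combines the reconstruction quality of real images with how far the process sits from the equilibrium ratio, it decreases monotonically as training converges, giving the "approximate convergence measure" referenced above without requiring inspection of samples.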