Unrolled Generative Adversarial Networks
2016-11-07
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
Code
- github.com/poolio/unrolled_gan (official, in paper, TensorFlow)
- github.com/mangoubi/Min-max-optimization-algorithm-for-training-GANs (TensorFlow, ★ 11)
- github.com/andrewliao11/unrolled-gans (PyTorch)
- github.com/locuslab/gradient_regularized_gan (TensorFlow)
- github.com/apaszke/pytorch-dist (PyTorch)
- github.com/chameleonTK/continual-learning-for-HAR (PyTorch)
- github.com/lyken17/pytorch (PyTorch)
- github.com/MarisaKirisame/unroll_gan (PyTorch)
- github.com/alex98chen/testGAN (TensorFlow)
Abstract
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
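To give a rough feel for the idea (this is a toy illustration, not the paper's implementation), the sketch below applies unrolling to a hypothetical bilinear min-max game f(g, d) = g·d, where a scalar "generator" g minimizes f and a scalar "discriminator" d maximizes it. The generator's gradient is computed through k unrolled discriminator ascent steps, which for this game can be written in closed form; all names and step sizes here are made up for the example.

```python
def unrolled_generator_grad(g, d, k, eta_d):
    """Generator gradient for the toy game f(g, d) = g * d.

    Unrolling k discriminator ascent steps (each step adds eta_d * df/dd
    = eta_d * g) gives d_k = d + k * eta_d * g. Differentiating
    f(g, d_k) with respect to g *through* those unrolled steps yields
    d_k + k * eta_d * g; the extra term is what the unrolling adds
    relative to using the current discriminator (k = 0).
    """
    d_unrolled = d + k * eta_d * g
    return d_unrolled + k * eta_d * g


def train(k, steps=500, eta=0.1):
    """Alternating descent/ascent from (g, d) = (1, 1); only the
    generator's gradient is unrolled. Returns the final distance
    (in 1-norm) from the equilibrium at the origin."""
    g, d = 1.0, 1.0
    for _ in range(steps):
        g -= eta * unrolled_generator_grad(g, d, k, eta)
        d += eta * g  # one ordinary (non-unrolled) discriminator step
    return abs(g) + abs(d)
```

With k = 0 this reduces to plain alternating gradient dynamics, which on a bilinear game orbit the equilibrium without converging; with k > 0 the extra k·η·g term damps the dynamics and the iterates spiral in to the saddle point, mirroring the stabilization described in the abstract.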