
Generalization and Stability of GANs: A theory and promise from data augmentation

2021-01-01

Khoat Than, Nghia Vu



Abstract

Instability when training generative adversarial networks (GANs) is a notoriously difficult issue, and the generalization of GANs remains an open question. In this paper, we analyze various sources of instability, which come not only from the discriminator but also from the generator. We then point out that requiring Lipschitz continuity of both the discriminator and the generator leads to generalization and stability for GANs. As a consequence, this work naturally provides a generalization bound for a large class of existing models and explains the success of recent large-scale generators. Finally, we show why data augmentation can ensure Lipschitz continuity of both the discriminator and generator. This work therefore provides a theoretical basis for a simple way to ensure generalization in GANs, explaining the highly successful use of data augmentation for GANs in practice.
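The usage pattern the abstract refers to can be illustrated with a minimal sketch: the discriminator only ever sees augmented samples, and the same (here, non-differentiable, toy) augmentation pipeline is applied to real and generated batches alike. This is an assumption-laden illustration in the spirit of differentiable-augmentation GAN training, not the paper's actual method; the function and batch shapes below are hypothetical.

```python
import numpy as np

def augment(batch, rng):
    """Toy augmentation: random horizontal flip plus brightness jitter.

    Applied identically to real and generated batches before they
    reach the discriminator, so the discriminator never sees an
    un-augmented sample from either distribution.
    """
    out = batch.copy()
    for i in range(out.shape[0]):
        if rng.random() < 0.5:
            out[i] = out[i, :, ::-1].copy()  # horizontal flip
        # small brightness shift, clipped back into the valid range
        out[i] = np.clip(out[i] + rng.uniform(-0.1, 0.1), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
real = rng.random((4, 8, 8))  # toy batch of 4 grayscale "images"
fake = rng.random((4, 8, 8))  # stand-in for generator output

# The discriminator would score d_real and d_fake, never real/fake directly.
d_real = augment(real, rng)
d_fake = augment(fake, rng)
print(d_real.shape, d_fake.shape)  # (4, 8, 8) (4, 8, 8)
```

In practice the augmentation must be differentiable (so gradients flow back to the generator through the augmented fakes), which this NumPy sketch deliberately glosses over.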
