
Regularizing Generative Adversarial Networks under Limited Data

2021-04-07 · CVPR 2021 · Code Available

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang


Abstract

Recent years have witnessed the rapid progress of generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements recent data augmentation methods. These properties enable GAN models to achieve state-of-the-art performance when only a limited amount of ImageNet training data is available.
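As a rough illustration of the idea, the regularizer penalizes discriminator predictions relative to slowly moving anchors (exponential moving averages of the discriminator's outputs on real and generated samples). The sketch below is a minimal NumPy rendition assuming a squared-hinge form and a small regularization weight; the names `LeCamEMA`, `lecam_reg`, and the default `decay`/`weight` values are illustrative, not the authors' exact implementation.

```python
import numpy as np

class LeCamEMA:
    """Exponential moving averages of the discriminator's mean outputs on
    real and generated samples; these act as anchors for the regularizer."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.d_real = 0.0  # anchor tracking predictions on real images
        self.d_fake = 0.0  # anchor tracking predictions on generated images

    def update(self, d_real_mean, d_fake_mean):
        # Standard EMA update with the given decay rate.
        self.d_real = self.decay * self.d_real + (1 - self.decay) * d_real_mean
        self.d_fake = self.decay * self.d_fake + (1 - self.decay) * d_fake_mean

def lecam_reg(d_real, d_fake, ema, weight=0.01):
    """Regularization term added to the discriminator loss (assumed
    squared-hinge form): penalizes real predictions exceeding the fake
    anchor and fake predictions falling below the real anchor."""
    reg = np.mean(np.maximum(d_real - ema.d_fake, 0.0) ** 2) \
        + np.mean(np.maximum(ema.d_real - d_fake, 0.0) ** 2)
    return weight * reg
```

In training, the anchors would be updated once per discriminator step and the returned term added to the adversarial loss; the small weight keeps the penalty from dominating early training.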


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 25% ImageNet 128x128 | LeCAM + DA | FID | 11.16 | | Unverified |
| CAT 256x256 | StyleGAN2 + DA + RLC (Ours) | FID | 10.16 | | Unverified |
| CIFAR-10 | LeCAM (BigGAN + DA) | FID | 8.46 | | Unverified |
| CIFAR-100 | LeCAM (BigGAN + DA) | FID | 11.2 | | Unverified |
| CIFAR-100 | LeCAM (StyleGAN2 + ADA) | FID | 2.99 | | Unverified |
| FFHQ 256x256 | LeCAM (StyleGAN2 + ADA) | FID | 3.49 | | Unverified |
| ImageNet 128x128 | LeCAM + DA | FID | 6.54 | | Unverified |
