Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training

2021-11-01 · NeurIPS 2021

Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park


Abstract

Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN. While one of the most popular cGANs is the auxiliary classifier GAN with a softmax cross-entropy loss (ACGAN), it is widely known that training ACGAN becomes challenging as the number of classes in the dataset increases. ACGAN also tends to generate easily classifiable samples that lack diversity. In this paper, we introduce two cures for ACGAN. First, we identify that exploding gradients in the classifier can cause an undesirable collapse early in training, and that projecting input vectors onto a unit hypersphere resolves the problem. Second, we propose the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset. On this foundation, we propose the Rebooted Auxiliary Classifier Generative Adversarial Network (ReACGAN). Experimental results show that ReACGAN achieves state-of-the-art generation results on the CIFAR10, Tiny-ImageNet, CUB200, and ImageNet datasets. We also verify that ReACGAN benefits from differentiable augmentations and that D2D-CE harmonizes with the StyleGAN2 architecture. Model weights and a software package that provides implementations of representative cGANs and all experiments in our paper are available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
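The two fixes the abstract names can be sketched concretely: (1) L2-normalizing embeddings onto a unit hypersphere so classifier gradients stay bounded, and (2) a data-to-data cross-entropy that contrasts each sample against same-batch samples of other classes, with margins that down-weight already-easy positives and negatives. The sketch below is a simplified NumPy illustration, not the paper's exact implementation; the function names, the margin defaults (`m_p`, `m_n`), and the temperature value are illustrative assumptions.

```python
import numpy as np

def project_to_sphere(x, eps=1e-8):
    # L2-normalize each row so embeddings lie on the unit hypersphere,
    # bounding feature norms (the paper's cure for exploding classifier gradients).
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def d2d_ce(features, proxies, labels, m_p=0.98, m_n=0.02, tau=0.25):
    """Simplified Data-to-Data Cross-Entropy loss (illustrative sketch).

    features: (N, D) sample embeddings; proxies: (C, D) class embeddings;
    labels: (N,) integer class labels. Margins m_p/m_n and temperature tau
    are hypothetical defaults for this sketch.
    """
    f = project_to_sphere(features)
    w = project_to_sphere(proxies)
    pos = np.sum(f * w[labels], axis=1)        # data-to-class similarity
    sim = f @ f.T                              # data-to-data similarities
    # Positive margin: clamp above so already-confident positives give no gradient.
    pos = np.minimum(pos - m_p, 0.0) / tau
    # Negative margin: clamp below so already-separated negatives give no gradient.
    neg = np.maximum(sim - m_n, 0.0) / tau
    # Negatives are same-batch samples from *different* classes.
    diff = labels[:, None] != labels[None, :]
    neg_exp = np.where(diff, np.exp(neg), 0.0).sum(axis=1)
    loss = -np.log(np.exp(pos) / (np.exp(pos) + neg_exp))
    return loss.mean()
```

Because both features and proxies are projected before the similarities are taken, every logit is a cosine similarity in [-1, 1], which is what keeps the loss surface well-behaved as the class count grows.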

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ArtBench-10 (32x32) | ReACGAN + DiffAug | FID | 3.18 | — | Unverified |
| CIFAR-10 | StyleGAN2 + DiffAugment + D2D-CE | FID | 2.26 | — | Unverified |
| ImageNet 128x128 | ReACGAN | FID | 8.21 | — | Unverified |

Reproductions