ViTGAN: Training GANs with Vision Transformers

2021-07-09 · ICLR 2022 · Code Available

Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Abstract

Recently, Vision Transformers (ViTs) have shown competitive performance on image recognition while requiring less vision-specific inductive biases. In this paper, we investigate if such performance can be extended to image generation. To this end, we integrate the ViT architecture into generative adversarial networks (GANs). For ViT discriminators, we observe that existing regularization methods for GANs interact poorly with self-attention, causing serious instability during training. To resolve this issue, we introduce several novel regularization techniques for training GANs with ViTs. For ViT generators, we examine architectural choices for latent and pixel mapping layers to facilitate convergence. Empirically, our approach, named ViTGAN, achieves comparable performance to the leading CNN-based GAN models on three datasets: CIFAR-10, CelebA, and LSUN bedroom.
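
The abstract describes the generator only at a high level: a latent mapping into patch tokens processed by a ViT, followed by a pixel mapping back to image space. The PyTorch sketch below is an illustrative skeleton under those assumptions, not the authors' implementation; the plain MLP latent mapping, the per-patch linear pixel head, and all dimensions are placeholder choices, and the paper's specific modulation and pixel-decoding layers are omitted.

```python
# Illustrative sketch of a ViT-style GAN generator (not the authors' exact design).
# Assumptions: PyTorch; the latent mapping is a plain MLP; the pixel mapping is a
# per-patch linear head; the paper's modulated normalization and implicit pixel
# decoder are intentionally omitted for brevity.
import torch
import torch.nn as nn


class ViTGenerator(nn.Module):
    def __init__(self, latent_dim=128, embed_dim=256, depth=4, num_heads=4,
                 img_size=32, patch_size=4, img_channels=3):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        self.patch_size = patch_size
        self.img_size = img_size
        self.img_channels = img_channels

        # Latent mapping: project z to one embedding per patch token.
        self.latent_mapping = nn.Sequential(
            nn.Linear(latent_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, self.num_patches * embed_dim),
        )
        # Learned positional embeddings for the patch tokens.
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

        # Plain Transformer encoder over the patch tokens.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

        # Pixel mapping: decode each output token into one patch of pixels.
        self.pixel_mapping = nn.Linear(embed_dim, img_channels * patch_size ** 2)

    def forward(self, z):
        b = z.size(0)
        tokens = self.latent_mapping(z).view(b, self.num_patches, -1) + self.pos_embed
        tokens = self.transformer(tokens)
        patches = self.pixel_mapping(tokens)  # (B, N, C * P * P)

        # Reassemble the patch grid into a full image.
        grid = self.img_size // self.patch_size
        patches = patches.view(b, grid, grid, self.img_channels,
                               self.patch_size, self.patch_size)
        img = patches.permute(0, 3, 1, 4, 2, 5).reshape(
            b, self.img_channels, self.img_size, self.img_size)
        return torch.tanh(img)


# Example: sample a batch of 32x32 images from random latents.
generator = ViTGenerator()
fake = generator(torch.randn(8, 128))
print(fake.shape)  # torch.Size([8, 3, 32, 32])
```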

Tasks

Image Generation

Benchmark Results

Dataset     Model     Metric   Claimed   Verified   Status
CIFAR-10    ViTGAN    FID      6.66      —          Unverified
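
The claimed value is a Fréchet Inception Distance (FID) on CIFAR-10, which a verification run would recompute. For reference, below is a minimal NumPy/SciPy sketch of the standard FID formula between Gaussian feature statistics; the Inception feature-extraction step is omitted, and the function name and arguments are illustrative.

```python
# Sketch of the standard FID formula between real and generated feature
# statistics (means and covariances); feature extraction is assumed done.
import numpy as np
from scipy import linalg


def frechet_distance(mu_r, sigma_r, mu_g, sigma_g):
    """FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^(1/2))."""
    diff = mu_r - mu_g
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```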

Reproductions