SOTAVerified

Generative Adversarial Transformers

2021-03-01 · Code Available

Drew A. Hudson, C. Lawrence Zitnick


Abstract

We introduce the GANformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, so it can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and to encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model's strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing that it achieves state-of-the-art results in terms of image quality and diversity while enjoying fast learning and better data efficiency. Further qualitative and quantitative experiments offer insight into the model's inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach. An implementation of the model is available at https://github.com/dorarad/gansformer.
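The bipartite propagation described in the abstract can be illustrated with plain attention primitives. The following is a minimal numpy sketch, not the paper's implementation: the learned projections and the multiplicative, StyleGAN-like modulation of the actual GANformer are omitted, and all function names here are illustrative. What the sketch does capture is the cost structure: with k latents and n image features, each round of latents-to-features and features-to-latents attention costs O(n·k), rather than the O(n²) of full self-attention over the features.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product attention from each query to the key/value set.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (n_q, n_kv)
    return softmax(scores, axis=-1) @ values  # (n_q, d)

def bipartite_round(latents, features):
    # One round of duplex-style propagation (illustrative; learned weights
    # omitted): latents aggregate information from the image features, then
    # the features read back from the updated latents. For n features and
    # k latents, both attention maps are (k, n)-shaped or (n, k)-shaped,
    # so the cost is O(n*k) instead of O(n^2).
    latents = latents + cross_attention(latents, features, features)
    features = features + cross_attention(features, latents, latents)
    return latents, features

rng = np.random.default_rng(0)
latents = rng.normal(size=(16, 32))    # k = 16 latent variables
features = rng.normal(size=(256, 32))  # n = 256 visual features (16x16 grid)
latents, features = bipartite_round(latents, features)
```

In the real model such rounds are interleaved with the generator's upsampling stages, so each latent can come to specialize on a region or object as resolution grows.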

Benchmark Results

Dataset                | Model     | Metric                   | Claimed | Verified | Status
-----------------------|-----------|--------------------------|---------|----------|-----------
Cityscapes             | VQGAN     | FID (10k training steps) | 173.8   |          | Unverified
Cityscapes             | SAGAN     | FID (10k training steps) | 12.81   |          | Unverified
Cityscapes             | GAN       | FID (10k training steps) | 11.57   |          | Unverified
Cityscapes             | StyleGAN2 | FID (10k training steps) | 8.35    |          | Unverified
Cityscapes             | GANformer | FID (10k training steps) | 5.76    |          | Unverified
CLEVR                  | GANformer | FID (5k training steps)  | 9.17    |          | Unverified
CLEVR                  | StyleGAN2 | FID (5k training steps)  | 16.05   |          | Unverified
CLEVR                  | GAN       | FID (5k training steps)  | 25.02   |          | Unverified
CLEVR                  | SAGAN     | FID (5k training steps)  | 26.04   |          | Unverified
CLEVR                  | VQGAN     | FID (5k training steps)  | 32.6    |          | Unverified
FFHQ                   | VQGAN     | FID (10k training steps) | 63.12   |          | Unverified
FFHQ                   | SAGAN     | FID (10k training steps) | 16.21   |          | Unverified
FFHQ                   | GAN       | FID (10k training steps) | 13.18   |          | Unverified
FFHQ                   | GANformer | FID (10k training steps) | 12.85   |          | Unverified
FFHQ                   | StyleGAN2 | Clean-FID (70k)          | 2.98    |          | Unverified
FFHQ                   | StyleGAN2 | FID                      | 2.84    |          | Unverified
FFHQ 256 x 256         | GANformer | FID                      | 7.42    |          | Unverified
LSUN Bedroom 256 x 256 | GANformer | FID (10k training steps) | 6.51    |          | Unverified
LSUN Bedroom 256 x 256 | StyleGAN2 | FID (10k training steps) | 11.53   |          | Unverified
LSUN Bedroom 256 x 256 | GAN       | FID (10k training steps) | 12.16   |          | Unverified
LSUN Bedroom 256 x 256 | SAGAN     | FID (10k training steps) | 14.06   |          | Unverified
LSUN Bedroom 256 x 256 | VQGAN     | FID (10k training steps) | 59.63   |          | Unverified
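The claimed numbers above are Fréchet Inception Distance (FID) scores, where lower is better. FID fits a Gaussian to Inception-v3 activations of the real and generated image sets and computes FID = ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). A self-contained numpy sketch of that formula (function names are ours; a real evaluation would extract the feature statistics from images with Inception-v3, which is omitted here):

```python
import numpy as np

def _sqrtm_psd(a):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(mu1, cov1, mu2, cov2):
    # Frechet distance between N(mu1, cov1) and N(mu2, cov2).
    # Tr((C1 C2)^{1/2}) is computed through the equivalent symmetric form
    # (C1^{1/2} C2 C1^{1/2})^{1/2}, which keeps everything real and PSD.
    c1_sqrt = _sqrtm_psd(cov1)
    covmean = _sqrtm_psd(c1_sqrt @ cov2 @ c1_sqrt)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Sanity checks: identical statistics give 0; shifting the mean of a
# unit-covariance Gaussian by a vector d adds ||d||^2.
mu, cov = np.zeros(2), np.eye(2)
same = fid(mu, cov, mu, cov)                      # ~0.0
shifted = fid(mu, cov, np.array([1.0, 0.0]), cov)  # ~1.0
```

Note that FID is sensitive to the number of samples used, which is why the metric column distinguishes the evaluation protocol (e.g. 10k training steps vs. the Clean-FID variant on 70k images); scores are only comparable within a protocol.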

Reproductions