Anycost GANs for Interactive Image Synthesis and Editing

2021-03-04 · CVPR 2021 · Code Available

Ji Lin, Richard Zhang, Frieder Ganz, Song Han, Jun-Yan Zhu

Abstract

Generative adversarial networks (GANs) have enabled photorealistic image synthesis and editing. However, due to the high computational cost of large-scale generators (e.g., StyleGAN2), it usually takes seconds to see the result of a single edit on edge devices, prohibiting an interactive user experience. In this paper, we take inspiration from modern rendering software and propose Anycost GAN for interactive natural image editing. We train the Anycost GAN to support elastic resolutions and channels for faster image generation at versatile speeds. Running subsets of the full generator produces outputs that are perceptually similar to the full generator's, making them a good proxy for preview. By using sampling-based multi-resolution training, adaptive-channel training, and a generator-conditioned discriminator, the anycost generator can be evaluated at various configurations while achieving better image quality than separately trained models. Furthermore, we develop new encoder training and latent code optimization techniques to encourage consistency between the different sub-generators during image projection. Anycost GAN can be executed at various cost budgets (up to 10x computation reduction) and adapts to a wide range of hardware and latency requirements. When deployed on desktop CPUs and edge devices, our model provides perceptually similar previews at 6-12x speedup, enabling interactive image editing. The code and demo are publicly available: https://github.com/mit-han-lab/anycost-gan.
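As a toy sketch of the adaptive-channel idea described in the abstract (not the paper's actual implementation; all names and shapes here are illustrative), a cheaper sub-generator can share weights with the full model by running only the first fraction of each layer's channels:

```python
import numpy as np

# Toy "generator": a stack of weight matrices standing in for conv layers.
# A sub-generator slices the leading channels of each shared weight matrix,
# so narrow and full configurations share the same parameters.
rng = np.random.default_rng(0)
full_weights = [rng.standard_normal((64, 32)),
                rng.standard_normal((32, 64))]

def run_generator(weights, z, channel_ratio=1.0):
    """Run the toy generator using the first `channel_ratio` of each
    hidden layer's output channels; the final layer keeps full width."""
    x = z
    for i, w in enumerate(weights):
        is_last = (i == len(weights) - 1)
        out_ch = w.shape[0] if is_last else max(1, int(w.shape[0] * channel_ratio))
        in_ch = x.shape[0]
        x = np.maximum(w[:out_ch, :in_ch] @ x, 0.0)  # sliced matmul + ReLU
    return x

z = rng.standard_normal(32)
full_out = run_generator(full_weights, z, channel_ratio=1.0)
half_out = run_generator(full_weights, z, channel_ratio=0.5)  # ~4x cheaper matmuls
```

The half-width pass reuses the same weights, so its output is a cheap, weight-consistent approximation of the full pass; the paper additionally trains the shared weights (with a generator-conditioned discriminator) so the sub-generator outputs stay perceptually close, which this random-weight sketch does not do.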

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| FFHQ | Anycost GAN | FID | 2.77 | — | Unverified |
| FFHQ 128 x 128 | Anycost GAN | FID | 3.98 | — | Unverified |
| FFHQ 256 x 256 | Anycost GAN | FID | 3.35 | — | Unverified |
| FFHQ 512 x 512 | Anycost GAN | FID | 3.08 | — | Unverified |
