SOTA Verified

Image Generators with Conditionally-Independent Pixel Synthesis

2020-11-27 · CVPR 2021 · Code Available

Ivan Anokhin, Kirill Demochkin, Taras Khakhulin, Gleb Sterkin, Victor Lempitsky, Denis Korzhenkov


Abstract

Existing image generator networks rely heavily on spatial convolutions and, optionally, self-attention blocks in order to gradually synthesize images in a coarse-to-fine manner. Here, we present a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel. No spatial convolutions or similar operations that propagate information across pixels are involved during the synthesis. We analyze the modeling capabilities of such generators when trained in an adversarial fashion, and observe the new generators to achieve similar generation quality to state-of-the-art convolutional generators. We also investigate several interesting properties unique to the new architecture.
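The key architectural idea, generating each pixel's color from only the latent vector and that pixel's coordinate, can be sketched as below. This is a hypothetical, untrained toy (random MLP weights, simple Fourier coordinate features), not the authors' implementation; all function names and sizes here are illustrative assumptions.

```python
import numpy as np

def fourier_features(coords, num_freqs=4):
    # coords: (N, 2) in [-1, 1]; sin/cos encodings at several frequencies,
    # a common way to feed pixel coordinates to a per-pixel MLP
    freqs = 2.0 ** np.arange(num_freqs)                           # (F,)
    angles = coords[:, None, :] * freqs[None, :, None] * np.pi    # (N, F, 2)
    return np.concatenate([np.sin(angles), np.cos(angles)],
                          axis=-1).reshape(len(coords), -1)       # (N, 4F)

def pixelwise_generator(z, height, width, rng):
    """Toy conditionally-independent synthesis: each pixel's RGB value is a
    function of (latent z, pixel coordinate) only -- no convolutions and no
    operations that propagate information across pixels."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=-1)          # (H*W, 2)
    # every pixel sees the same latent z plus its own coordinate features
    feats = np.concatenate([fourier_features(coords),
                            np.broadcast_to(z, (len(coords), len(z)))],
                           axis=-1)
    # one hidden layer with random (untrained) weights, purely illustrative
    w1 = rng.standard_normal((feats.shape[1], 64)) / np.sqrt(feats.shape[1])
    w2 = rng.standard_normal((64, 3)) / 8.0
    rgb = np.tanh(np.maximum(feats @ w1, 0.0) @ w2)               # (H*W, 3)
    return rgb.reshape(height, width, 3)

rng = np.random.default_rng(0)
img = pixelwise_generator(rng.standard_normal(64), 32, 32, rng)
print(img.shape)  # (32, 32, 3)
```

Because pixels are computed independently given the shared latent, such a generator can in principle render any subset of pixels, or the same scene at arbitrary resolutions, without synthesizing the full image.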

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| FFHQ 1024 × 1024 | CIPS | FID | 10.07 | — | Unverified |
| FFHQ 256 × 256 | CIPS | FID | 4.38 | — | Unverified |
| Landscapes 256 × 256 | CIPS | FID | 3.61 | — | Unverified |
| LSUN Churches 256 × 256 | CIPS | FID | 2.92 | — | Unverified |
| Satellite-Buildings 256 × 256 | CIPS | FID | 69.67 | — | Unverified |
| Satellite-Landscapes 256 × 256 | CIPS | FID | 48.47 | — | Unverified |

Reproductions