SOTAVerified

Adversarial Latent Autoencoders

2020-04-09 · CVPR 2020 · Code Available

Stanislav Pidhorskyi, Donald Adjeroh, Gianfranco Doretto



Abstract

Autoencoder networks are unsupervised approaches that aim to combine generative and representational properties by simultaneously learning an encoder-generator map. Although studied extensively, the questions of whether they have the same generative power as GANs, or whether they learn disentangled representations, have not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements in GAN training procedures. We designed two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE. We verify the disentanglement properties of both architectures. We show that StyleALAE can not only generate 1024x1024 face images with quality comparable to StyleGAN, but at the same resolution can also produce face reconstructions and manipulations based on real images. This makes ALAE the first autoencoder able to compare with, and go beyond the capabilities of, a generator-only type of architecture.
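The key idea in the abstract — a jointly learned encoder-generator pair whose reconstruction constraint is imposed in latent space rather than data space — can be sketched with toy linear maps. This is an illustrative sketch only, not the paper's implementation: the names F, G, and E and the linear stand-ins are assumptions standing in for the deep networks ALAE actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for ALAE's networks (illustration only):
#   F : z -> w   (maps a prior sample into the learned latent space)
#   G : w -> x   (generator, latent code to data)
#   E : x -> w   (encoder, data back to latent code)
d_z, d_w, d_x = 4, 8, 16
F = rng.normal(size=(d_w, d_z))
G = rng.normal(size=(d_x, d_w))
# Choosing E as the pseudo-inverse of G makes E exactly invert G here,
# mimicking a perfectly trained encoder-generator pair.
E = np.linalg.pinv(G)

def latent_reconstruction_loss(z):
    """ALAE's reciprocity constraint lives in latent space:
    minimize ||w - E(G(w))||^2 instead of a pixel-space loss."""
    w = F @ z
    w_rec = E @ (G @ w)
    return float(np.sum((w - w_rec) ** 2))

z = rng.normal(size=d_z)
loss = latent_reconstruction_loss(z)
```

Because E inverts G in this toy setup, the latent reconstruction loss is (numerically) zero; during actual ALAE training this term is minimized jointly with an adversarial loss that is also computed on latent codes.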

Benchmark Results

Dataset         | Model     | Metric | Claimed | Verified | Status
CelebA 256x256  | StyleALAE | FID    | 19.21   | —        | Unverified
FFHQ 1024x1024  | StyleALAE | FID    | 13.09   | —        | Unverified

Reproductions