
Imagining the Latent Space of a Variational Auto-Encoder

2019-09-25

Zezhen Zeng, Jonathon Hare, Adam Prügel-Bennett


Abstract

Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence, the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to "imagine" the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a β-VAE.
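The abstract describes training a GAN-style decoder whose outputs the (frozen) VAE encoder must map back to the same region of latent space. A minimal sketch of one such generator update is shown below; the module names (`vae_encoder`, `generator`, `discriminator`), the use of a sampled posterior latent, the mean-squared latent-consistency penalty, and the weight `lambda_latent` are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): one generator step for a GAN-style
# decoder constrained so that its outputs re-encode to the same latent region.
import torch
import torch.nn.functional as F

def generator_step(vae_encoder, generator, discriminator, opt_g,
                   x_real, lambda_latent=10.0):
    # Encode a real image with the frozen VAE encoder and sample a latent code.
    with torch.no_grad():
        mu, logvar = vae_encoder(x_real)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    x_fake = generator(z)  # the "imagined" image for this latent code

    # Standard non-saturating GAN loss on the generated image.
    logits = discriminator(x_fake)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Latent-consistency constraint: the VAE encoder should map the
    # generated image back to (roughly) the same region of latent space.
    mu_fake, _ = vae_encoder(x_fake)
    latent_loss = F.mse_loss(mu_fake, mu)

    loss = adv_loss + lambda_latent * latent_loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```

Freezing the encoder and penalising only the distance between re-encoded means is one plausible reading of the "same region of latent space" constraint; other choices (for example, a divergence between the two posteriors) would serve the same purpose.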
