SOTAVerified

Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function

2019-07-21

Stephen Odaibo


Abstract

In Bayesian machine learning, the posterior distribution is typically computationally intractable, hence variational inference is often required. In this approach, an evidence lower bound on the log-likelihood of the data is maximized during training. Variational autoencoders (VAEs) are one important example where variational inference is utilized. In this tutorial, we derive the variational lower bound loss function of the standard variational autoencoder. We do so in the instance of a Gaussian latent prior and Gaussian approximate posterior, under which assumptions the Kullback-Leibler term in the variational lower bound has a closed-form solution. We derive essentially everything we use along the way, from Bayes' theorem to the Kullback-Leibler divergence.
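The closed-form KL term the abstract refers to can be sketched numerically. The following is a minimal illustration, not the paper's code: it assumes a diagonal Gaussian approximate posterior N(mu, diag(exp(log_var))) and a standard normal prior N(0, I), and pairs the KL term with a squared-error reconstruction term (one common choice for a Gaussian decoder); all function names are illustrative.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )
    # = -1/2 * sum(1 + log_var - mu^2 - exp(log_var)), summed over latent dims.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

def negative_elbo(x, x_recon, mu, log_var):
    # Negative of the evidence lower bound: reconstruction error plus KL term.
    # Squared error corresponds (up to constants) to a Gaussian decoder likelihood.
    recon = np.sum((x - x_recon) ** 2)
    return recon + gaussian_kl(mu, log_var)
```

When the approximate posterior equals the prior (mu = 0, log_var = 0), the KL term vanishes, as the closed-form expression predicts.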
