Isolating Sources of Disentanglement in Variational Autoencoders
Ricky T. Q. Chen, Xuechen Li, Roger Grosse, David Duvenaud
Code
- github.com/rtqichen/beta-tcvae (official, PyTorch)
- github.com/ema-marconato/glancenet (PyTorch)
- github.com/voxmenthe/beta_tcvae_v1 (PyTorch)
- github.com/voxmenthe/beta-tcvae_v1 (PyTorch)
- github.com/mcharrak/discreteVAE (TensorFlow)
- github.com/suvalaki/Deeper (TensorFlow)
- github.com/tianyu-lu/tcvae (PyTorch)
Abstract
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art β-VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement when the latent variable model is trained using our framework.
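A sketch of the decomposition the abstract refers to, in commonly used notation (the symbols below are assumptions for illustration, not quoted from the paper): with $n$ indexing datapoints, $z$ the latent code, and $q(z, n) = q(z \mid n)\,p(n)$ the aggregate joint, the KL term of the ELBO can be split as

```latex
\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\right]
  = \underbrace{\mathrm{KL}\big(q(z, n)\,\|\,q(z)\,p(n)\big)}_{\text{index-code mutual information}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

The middle term is the total correlation of the aggregate posterior $q(z)$; upweighting only that term (rather than the whole KL, as in β-VAE) is the refinement the abstract describes.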