
Causally Disentangled Generative Variational AutoEncoder

2023-02-23 · Code Available

SeungHwan An, Kyungwoo Song, Jong-June Jeon

Abstract

We present a new supervised learning technique for the Variational AutoEncoder (VAE) that allows it to simultaneously learn a causally disentangled representation and generate causally disentangled outcomes. We call this approach Causally Disentangled Generation (CDG). CDG is a generative model that accurately decodes an output based on a causally disentangled representation. Our research demonstrates that adding supervised regularization to the encoder alone is insufficient for achieving a generative model with CDG, even for a simple task. We therefore explore the necessary and sufficient conditions for achieving CDG within a specific model. Additionally, we introduce a universal metric for evaluating the causal disentanglement of a generative model. Empirical results on both image and tabular datasets support our findings.
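To make the idea of supervised regularization for a VAE concrete, the sketch below computes a toy loss combining the usual ELBO terms with a supervised penalty that ties a subset of latent dimensions to known factor labels. This is a generic illustration of encoder-side supervised regularization, not the paper's exact CDG objective; the function name, the squared-error alignment term, and the weight `lam` are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def supervised_vae_loss(x, x_recon, mu, logvar, z, labels, lam=1.0):
    """Illustrative loss: ELBO terms plus a supervised regularizer
    aligning the first latent dimensions with labeled factors.
    (Generic sketch; not the paper's CDG objective.)"""
    # Reconstruction term (Gaussian decoder -> squared error).
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I).
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    # Supervised regularizer: first k latent dims should match the labels.
    k = labels.shape[1]
    sup = np.mean(np.sum((z[:, :k] - labels) ** 2, axis=1))
    return recon + kl + lam * sup

# Toy batch: 8 samples, 4-dim data, 3-dim latent, 2 labeled factors.
x = rng.normal(size=(8, 4))
mu = rng.normal(size=(8, 3))
logvar = rng.normal(scale=0.1, size=(8, 3))
# Reparameterization trick: z = mu + sigma * eps.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=(8, 3))
labels = rng.normal(size=(8, 2))
loss = supervised_vae_loss(x, 0.9 * x, mu, logvar, z, labels)
print(np.isfinite(loss))
```

The paper's point is that a penalty of this encoder-only form, however it is instantiated, is by itself insufficient for causally disentangled generation; conditions on the decoder side are also required.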
