SOTAVerified

PVAE: Learning Disentangled Representations with Intrinsic Dimension via Approximated L0 Regularization

2019-11-15 · NeurIPS Workshop DC_S2 2019 · Code Available

Anonymous


Abstract

Many models based on the Variational Autoencoder have been proposed to achieve disentangled latent variables during inference. However, most current work focuses on designing powerful disentangling regularizers, while the number of latent dimensions chosen at initialization can severely influence disentanglement. Thus, a pruning mechanism is introduced that automatically seeks the intrinsic dimension of the data while promoting disentangled representations. The proposed method is validated on MPI3D and MNIST, where it advances state-of-the-art methods in disentanglement, reconstruction, and robustness. The code is available at https://github.com/WeyShi/FYP-of-Disentanglement.
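The abstract describes pruning latent dimensions via an approximated L0 penalty. A common way to approximate L0 is the hard-concrete gating scheme of Louizos et al.; the sketch below illustrates that idea applied to a VAE's latent vector. It is a minimal NumPy illustration, not the paper's implementation, and all parameter values (`log_alpha`, `beta`, `gamma`, `zeta`) are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expected_l0(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    # Probability that each gate is nonzero under the hard-concrete
    # distribution; summing this gives a differentiable L0 surrogate.
    return sigmoid(log_alpha - beta * np.log(-gamma / zeta))

def sample_gates(log_alpha, rng, beta=2/3, gamma=-0.1, zeta=1.1):
    # Reparameterized sample of hard-concrete gates in [0, 1].
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = sigmoid((np.log(u) - np.log(1 - u) + log_alpha) / beta)
    s_bar = s * (zeta - gamma) + gamma   # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)     # hard clip -> exact zeros possible

# Ten latent dimensions: gates with very negative log_alpha tend to
# be sampled as exact zeros, effectively pruning those dimensions.
log_alpha = np.array([3.0] * 4 + [-3.0] * 6)  # hypothetical learned params
rng = np.random.default_rng(0)
z = rng.normal(size=10)                       # a sampled latent vector
z_pruned = sample_gates(log_alpha, rng) * z   # gated latent code
l0_penalty = expected_l0(log_alpha).sum()     # added to the VAE loss
```

During training, `l0_penalty` would be weighted and added to the usual ELBO objective so that unused latent dimensions are driven toward zero gates, leaving the surviving dimensions as an estimate of the data's intrinsic dimension.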
