Hyperspherical Variational Auto-Encoders
Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, Jakub M. Tomczak
Code
- github.com/nicola-decao/s-vae-pytorch — Official, in paper, PyTorch, ★ 388
- github.com/nicola-decao/s-vae-tf — Official, in paper, TensorFlow, ★ 0
- github.com/nicola-decao/s-vae — Official, in paper, TensorFlow, ★ 0
- github.com/clementchadebec/benchmark_VAE — PyTorch, ★ 1,983
- github.com/Chenxingyu1990/gzsl_svae — PyTorch, ★ 28
- github.com/Chenxingyu1990/A-Boundary-Based-Out-of-Distribution-Classifier-for-Generalized-Zero-Shot-Learning — PyTorch, ★ 28
- github.com/jiacheng-xu/vmf_vae_nlp — PyTorch, ★ 0
- github.com/acr42/Neural-Variational-Knowledge-Graphs — TensorFlow, ★ 0
- github.com/alexanderimanicowenrivers/Neural-Variational-Knowledge-Graphs — TensorFlow, ★ 0
Abstract
The Variational Auto-Encoder (VAE) is one of the most widely used unsupervised machine learning models. Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or S-VAE, is better suited to capturing data with a hyperspherical latent structure, while outperforming a normal VAE (N-VAE) in low dimensions on other data types. Code at http://github.com/nicola-decao/s-vae-tf and https://github.com/nicola-decao/s-vae-pytorch
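To make the vMF parameterization concrete, the following is a minimal NumPy sketch of sampling from a von Mises-Fisher distribution on the unit hypersphere via the classic rejection scheme of Ulrich (1984) / Wood (1994): first sample the scalar component along the mean direction, then a uniform tangent direction, then rotate onto the target mean with a Householder reflection. This is a simplified illustration of the sampling technique, not the authors' implementation (see the linked repositories for that); the function name `sample_vmf` is our own.

```python
import numpy as np

def sample_vmf(mu, kappa, rng=None):
    """Draw one sample from vMF(mu, kappa) on the unit sphere S^{m-1}.

    Uses the Wood (1994) rejection scheme for the component w along mu,
    then a Householder reflection to rotate e1 onto mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    m = mu.shape[0]

    # Rejection-sample w = <x, mu>, the cosine of the angle to the mean.
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (m - 1) ** 2)) / (m - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (m - 1) * np.log(1 - x0**2)
    while True:
        z = rng.beta((m - 1) / 2, (m - 1) / 2)
        w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
        if kappa * w + (m - 1) * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
            break

    # Uniform direction v on S^{m-2}, orthogonal complement of e1.
    v = rng.standard_normal(m - 1)
    v /= np.linalg.norm(v)
    sample = np.concatenate(([w], np.sqrt(1 - w**2) * v))

    # Householder reflection mapping e1 to mu (identity if mu == e1).
    e1 = np.zeros(m)
    e1[0] = 1.0
    d = e1 - mu
    norm_d = np.linalg.norm(d)
    if norm_d < 1e-12:
        return sample
    d /= norm_d
    return sample - 2 * d * (d @ sample)
```

In an S-VAE, the encoder would output `mu` (normalized) and `kappa`, and a reparameterization trick built on this sampler keeps gradients flowing; the sketch above covers only the sampling step.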
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Citeseer | S-VGAE | AUC (%) | 94.7 | — | Unverified |
| Cora | S-VGAE | AUC (%) | 94.1 | — | Unverified |
| Pubmed | S-VGAE | AUC (%) | 96.0 | — | Unverified |
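The AUC figures above come from link prediction: the model scores held-out true edges (positives) against sampled non-edges (negatives), and AUC is the probability that a random positive outranks a random negative. A minimal rank-based computation, with hypothetical scores for illustration:

```python
import numpy as np

def auc_score(pos_scores, neg_scores):
    """Rank-based AUC: P(random positive edge scores above a random negative)."""
    scores = np.concatenate([pos_scores, neg_scores])
    ranks = scores.argsort().argsort() + 1  # ranks 1..n (ties not handled)
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    # Mann-Whitney U statistic normalized to [0, 1].
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical scores where every positive outranks every negative:
print(auc_score(np.array([0.9, 0.8, 0.7]), np.array([0.2, 0.4])))  # → 1.0
```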