Sampling Generative Networks
Tom White
Code
- github.com/dribnet/plat (official, in paper; ★ 325)
- github.com/ptrblck/prog_gans_pytorch_inference (pytorch; ★ 322)
- github.com/amurthy1/dagan (tf; ★ 0)
- github.com/linxi159/GAN-training-tricks (★ 0)
- github.com/michael13162/DoodleGAN (★ 0)
- github.com/linxi159/Tips-and-tricks-to-train-GANs (★ 0)
- github.com/jaingaurav3/GAN-Hacks (★ 0)
- github.com/gitaar9/MLDAGAN (tf; ★ 0)
- github.com/MindSpore-scientific-2/code-3/tree/main/stable-sam (mindspore; ★ 0)
- github.com/hy-zpg/DAGAN (tf; ★ 0)
Abstract
We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model's prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Binary classification using attribute vectors is presented as a technique supporting quantitative analysis of the latent space. Most techniques are intended to be independent of model type, and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.
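The spherical interpolation mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation of the standard slerp formula (not the paper's own code; the function name and fallback threshold are our choices): it moves between two latent vectors along a great circle, so intermediate points keep a norm consistent with a Gaussian prior instead of passing through the low-norm region that straight-line interpolation visits.

```python
import numpy as np

def slerp(val, low, high):
    """Spherical linear interpolation between latent vectors.

    val  : interpolation parameter in [0, 1]
    low  : latent vector at val = 0
    high : latent vector at val = 1
    """
    # Angle between the two vectors (computed on normalized copies).
    low_n = low / np.linalg.norm(low)
    high_n = high / np.linalg.norm(high)
    omega = np.arccos(np.clip(np.dot(low_n, high_n), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:
        # Vectors are nearly parallel: fall back to linear interpolation.
        return (1.0 - val) * low + val * high
    # Standard slerp: weights sum along the arc, preserving vector norm
    # when ||low|| == ||high||.
    return (np.sin((1.0 - val) * omega) / so) * low \
         + (np.sin(val * omega) / so) * high
```

For unit-norm endpoints, every interpolated point also has unit norm, whereas the linear midpoint `0.5 * (low + high)` of two orthogonal unit vectors has norm ≈ 0.707; this norm collapse is what the abstract's "diverging from a model's prior distribution" refers to avoiding.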