Generative modeling with one recursive network
Benjamin Lincoln Brimacombe
Abstract
We propose training a multilayer perceptron simultaneously as an encoder and a decoder in order to create a high-quality generative model. In one call the network is optimized as either the encoder or the decoder; in a second, recursive call, it consumes its own output to learn the remaining corresponding function, allowing popular statistical divergence measures to be minimized over a single feed-forward function. This approach derives from a simple reformulation of variational Bayes and extends naturally to the domain of Generative Adversarial Nets. Here we demonstrate a single network that learns a generative model via an adversarial minimax game played against itself. Experiments demonstrate that the single-network approach achieves efficacy comparable to the corresponding multi-network formulations.
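The following is a minimal, hypothetical sketch of the core idea, not the paper's exact method: a single shared weight set serves as both encoder and decoder, with a role flag distinguishing the two calls, and the second (recursive) call consumes the first call's output so that reconstruction error is minimized over one function. For brevity the "network" here is linear rather than a multilayer perceptron; the role-flag mechanism, learning rate, and squared-error objective are all illustrative assumptions.

```python
import numpy as np

# Assumptions (illustrative only): linear shared map A, a bias b that is
# active only in the decoder call (role flag = 1), squared-error objective.
rng = np.random.default_rng(0)
d, n, lr, steps = 4, 64, 0.05, 2000
X = rng.normal(size=(n, d))                  # toy dataset
A = rng.normal(scale=0.3, size=(d, d))       # weights shared by both roles
b = np.zeros(d)                              # activated by the decoder role flag

def recon_loss():
    Z = X @ A.T                              # encoder call (role flag 0)
    Xhat = Z @ A.T + b                       # recursive decoder call on own output
    return 0.5 * ((Xhat - X) ** 2).sum(axis=1).mean()

loss_init = recon_loss()
for _ in range(steps):
    Z = X @ A.T
    Xhat = Z @ A.T + b
    R = (Xhat - X) / n                       # residuals, averaged over the batch
    # Because A is used in BOTH calls, its gradient accumulates two terms
    # (product rule through the recursive composition A(A x) + b).
    grad_A = R.T @ Z + A.T @ (R.T @ X)
    A -= lr * grad_A
    b -= lr * R.sum(axis=0)
loss_final = recon_loss()
print(f"reconstruction loss: {loss_init:.3f} -> {loss_final:.3f}")
```

The key point the sketch illustrates is that a single set of parameters receives gradient contributions from both the encoding and the decoding role in one objective, so no second network is needed.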