
Doubly Stochastic Adversarial Autoencoder

2018-07-19 · ICLR 2018

Mahdi Azarafrooz


Abstract

Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder (VAE) [2] uses a KL divergence penalty to impose the prior, whereas the Adversarial Autoencoder (AAE) [1] uses a generative adversarial network (GAN) [3]. A GAN trades the complexities of sampling algorithms for the complexities of searching for a Nash equilibrium in a minimax game. Such minimax architectures are trained using data examples and gradients flowing through a generator and an adversary. A straightforward modification of the AAE is to replace the adversary with the maximum mean discrepancy (MMD) test [4-5]. This replacement leads to a new type of probabilistic autoencoder, which is also discussed in our paper. We propose a novel probabilistic autoencoder in which the adversary of the AAE is replaced with a space of stochastic functions. This replacement introduces a new source of randomness, which can be regarded as a continuous control for encouraging exploration. The added randomness prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples.
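The MMD-based variant mentioned above replaces the learned adversary with a fixed two-sample statistic: the encoder is penalized by the discrepancy between its code distribution and the prior. A minimal sketch of a (biased) squared-MMD estimator with a Gaussian kernel is shown below; the function names, the kernel choice, and the bandwidth are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and y.
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy
    # between the empirical distributions of x and y.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
prior = rng.standard_normal((256, 2))        # samples from the imposed prior
codes = rng.standard_normal((256, 2)) + 3.0  # hypothetical mismatched encoder codes

# The penalty is large when the code distribution differs from the prior
# and shrinks toward zero as the two distributions match.
print(mmd2_biased(prior, codes))
print(mmd2_biased(prior, rng.standard_normal((256, 2))))
```

In training, this scalar would replace the adversary's loss as the regularization term on the encoder; the paper's doubly stochastic version instead draws the test functions themselves at random each step, injecting the extra exploration noise described above.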
