Gradient Origin Networks
Sam Bond-Taylor, Chris G. Willcocks
Code

- github.com/cwkx/GON (official, PyTorch, ★ 160)
- github.com/kklemon/gon-pytorch (PyTorch, ★ 16)
- github.com/titu1994/tf_GON (TensorFlow, ★ 12)
- github.com/BariscanBozkurt/Gradient-Origin-Networks-Julia (Julia, ★ 0)
Abstract
This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior, which is implemented by initialising a latent vector with zeros, then using the gradient of the log-likelihood of the data with respect to this zero vector as new latent points. The approach has similar characteristics to autoencoders, but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. This also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. The experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
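The abstract's core step can be made concrete with a toy example. The sketch below is not the paper's implementation (which uses PyTorch and deep implicit networks); it assumes a simple linear decoder f(z) = Wz and a Gaussian likelihood, so the gradient of the log-likelihood with respect to the latent can be written out by hand. Initialising the latent at the origin (z₀ = 0) and taking the gradient of log p(x | z) with respect to it gives Wᵀx, which is then reused as the latent point for a second, reconstructing forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "decoder": f(z) = W @ z, latent dim 4, data dim 8.
# (The paper uses deep implicit networks; this is a hand-checkable stand-in.)
W = rng.standard_normal((8, 4))
x = rng.standard_normal(8)

# GON-style inference step: start the latent at the origin and take the
# gradient of the Gaussian log-likelihood there. Since
#   log p(x|z) = -0.5 * ||x - W @ z||^2 + const,
# the gradient at z0 = 0 is W.T @ (x - W @ z0) = W.T @ x.
z0 = np.zeros(4)
grad = W.T @ (x - W @ z0)  # d log p(x|z) / dz, evaluated at the origin
z = grad                   # the gradient itself serves as the latent point

# Second forward pass: reconstruct x from the inferred latent.
x_hat = W @ z

# Sanity check: compare the analytic gradient to a finite-difference estimate.
eps = 1e-6


def log_lik(z_):
    return -0.5 * np.sum((x - W @ z_) ** 2)


fd = np.array([
    (log_lik(z0 + eps * np.eye(4)[i]) - log_lik(z0)) / eps
    for i in range(4)
])
print(np.allclose(grad, fd, atol=1e-3))  # True
```

In the paper this gradient is obtained by autodiff through the network rather than in closed form, and training minimises the reconstruction loss of the second pass; the linear case above just makes the "gradient at the origin becomes the latent" idea easy to verify.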