Iterative VAE as a predictive brain model for out-of-distribution generalization

2020-12-01 · NeurIPS Workshop SVRHM 2020

Victor Boutin, Aimen Zerroug, Minju Jung, Thomas Serre

Abstract

Our ability to generalize beyond training data to novel, out-of-distribution image degradations is a hallmark of primate vision. The predictive brain, exemplified by predictive coding networks (PCNs), has become a prominent neuroscience theory of neural computation. Motivated by the recent successes of variational autoencoders (VAEs) in machine learning, we rigorously derive a correspondence between PCNs and VAEs. This motivates us to consider iterative extensions of VAEs (iVAEs) as plausible variational extensions of PCNs. We further demonstrate that iVAEs generalize to distributional shifts significantly better than both PCNs and VAEs. In addition, we propose a novel measure of recognizability for individual samples that can be tested against human psychophysical data. Overall, we hope this work will spur interest in iVAEs as a promising new direction for modeling in neuroscience.
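The core idea behind an iterative VAE, as the abstract describes it, is to replace a single amortized encoder pass with repeated refinement of the latent code, in the spirit of the recurrent inference performed by predictive coding networks. The toy sketch below illustrates that idea only: it uses an illustrative linear decoder and a unit-Gaussian prior, and refines a latent `z` by gradient descent on a reduced negative ELBO (reconstruction error plus prior term). All names, dimensions, and the optimization details are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, z_dim = 16, 4
W = rng.normal(size=(x_dim, z_dim))   # illustrative linear "decoder" (assumption)
x = rng.normal(size=x_dim)            # one input sample

def neg_elbo(z):
    # Reduced negative ELBO: reconstruction error + unit-Gaussian prior term.
    recon = x - W @ z
    return recon @ recon + z @ z

def grad(z):
    # Analytic gradient of neg_elbo with respect to z.
    return -2.0 * W.T @ (x - W @ z) + 2.0 * z

z = np.zeros(z_dim)                   # initial guess (here: the prior mean)
losses = [neg_elbo(z)]
for _ in range(30):                   # iterative refinement of the latent code
    z = z - 0.01 * grad(z)
    losses.append(neg_elbo(z))

print(losses[-1] < losses[0])         # the objective decreases over iterations
```

A standard VAE would stop after the amortized initial guess; the loop is what makes the inference "iterative," and it is this extra computation that the paper links to predictive coding and credits with better out-of-distribution behavior.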
