
Overparameterized Neural Networks Implement Associative Memory

2019-09-26

Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler


Abstract

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. Empirically, we show that: (1) overparameterized autoencoders store training samples as attractors, and thus, iterating the learned map leads to sample recovery; (2) the same mechanism allows for encoding sequences of examples, and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
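The attractor mechanism in finding (1) can be illustrated with a toy sketch (our own construction, not the authors' code; the dimensions, architecture, and training setup below are illustrative assumptions): train a small overparameterized two-layer autoencoder on a single example, verify the example is approximately a fixed point of the learned map, then iterate the map from a perturbed input to probe whether it is pulled back toward the stored example.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 4, 256                    # input dim; hidden width >> number of samples (overparameterized)
x_star = rng.standard_normal(d)  # the single training example

# Toy two-layer autoencoder f(x) = W2 @ tanh(W1 @ x) (illustrative, not the paper's architecture)
W1 = 0.1 * rng.standard_normal((h, d))
W2 = 0.1 * rng.standard_normal((d, h))

def f(x):
    return W2 @ np.tanh(W1 @ x)

# Plain gradient descent on the loss 0.5 * ||f(x_star) - x_star||^2
lr = 0.02
for _ in range(5000):
    z = np.tanh(W1 @ x_star)
    err = W2 @ z - x_star
    gW2 = np.outer(err, z)                            # dL/dW2
    gW1 = np.outer((W2.T @ err) * (1 - z**2), x_star)  # dL/dW1 (tanh' = 1 - tanh^2)
    W2 -= lr * gW2
    W1 -= lr * gW1

# The training example is (approximately) a fixed point of the learned map.
recon = np.linalg.norm(f(x_star) - x_star)
print("fixed-point error:", recon)

# Probe the attractor claim: iterate f starting from a perturbed input.
# (Convergence back to x_star is the paper's empirical/theoretical claim for
# trained autoencoders; it is not guaranteed for an arbitrary toy run.)
x0 = x_star + 0.3 * rng.standard_normal(d)
x = x0.copy()
for _ in range(50):
    x = f(x)
print("distance to x_star after iterating:", np.linalg.norm(x - x_star))
```

If the example is stored as an attractor, the iterated point drifts toward `x_star` rather than away from it; this mirrors the paper's single-example setting, where the stored example is proven to be an attractor of the trained map.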
