
An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks

2021-06-14 · NeurIPS 2021

Shashank Rajput, Kartik Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi


Abstract

It is well known that modern deep neural networks are powerful enough to memorize datasets even when the labels have been randomized. Recently, Vershynin (2020) settled a long-standing question by Baum (1988), proving that deep threshold networks can memorize n points in d dimensions using O(e^(1/δ²) + √n) neurons and O(e^(1/δ²)(d + √n) + n) weights, where δ is the minimum distance between the points. In this work, we improve the dependence on δ from exponential to almost linear, proving that O(1/δ + √n) neurons and O(d/δ + n) weights are sufficient. Our construction uses Gaussian random weights only in the first layer, while all the subsequent layers use binary or integer weights. We also prove new lower bounds by connecting memorization in neural networks to the purely geometric problem of separating n points on a sphere using hyperplanes.
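
As a rough illustration only (not the paper's actual construction), the sketch below shows the geometric ingredient the abstract refers to: a first layer of Gaussian random hyperplanes followed by threshold activations maps well-separated points on the sphere to binary codes, and once the codes are distinct, arbitrary labels could in principle be memorized by later layers. The function names, the num_neurons parameter, and the toy data are hypothetical choices for this demo and are not taken from the paper.

```python
import numpy as np

def threshold(x):
    # Heaviside-style threshold activation: 1 if the pre-activation is >= 0, else 0.
    return (x >= 0).astype(int)

def random_threshold_codes(points, num_neurons, seed=0):
    # One layer of Gaussian random hyperplanes (random weights and biases),
    # followed by threshold activations. Each point gets a binary code given
    # by which side of each hyperplane it falls on. This only illustrates the
    # "separating points with hyperplanes" idea, not the paper's construction.
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    W = rng.normal(size=(num_neurons, d))   # Gaussian random first-layer weights
    b = rng.normal(size=num_neurons)        # random hyperplane offsets
    return threshold(points @ W.T + b)

# Toy check: well-spread points on the unit sphere receive distinct codes.
# In the paper, subsequent layers (with binary/integer weights) would turn
# such codes into the desired labels; here we only verify separation.
rng = np.random.default_rng(1)
n, d = 50, 10
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # place the points on the sphere
codes = random_threshold_codes(X, num_neurons=64)
print("distinct codes:", len({tuple(c) for c in codes}), "out of", n)
```

With enough random hyperplanes relative to the minimum distance between points, all codes come out distinct in this toy setup; the paper's contribution is showing how few threshold neurons and how small (binary or integer) weights suffice to do this and then realize the labels.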
