A Random Matrix Approach to Neural Networks
Cosme Louart, Zhenyu Liao, Romain Couillet
Code: github.com/Zhenyu-LIAO/RMT4ELM
Abstract
This article studies the Gram random matrix model $G = \frac{1}{T}\Sigma^{\mathsf T}\Sigma$, $\Sigma = \sigma(WX)$, classically found in the analysis of random feature maps and random neural networks, where $X = [x_1, \ldots, x_T] \in \mathbb{R}^{p \times T}$ is a (data) matrix of bounded norm, $W \in \mathbb{R}^{n \times p}$ is a matrix of independent zero-mean, unit-variance entries, and $\sigma : \mathbb{R} \to \mathbb{R}$ is a Lipschitz continuous (activation) function, with $\sigma(WX)$ understood entry-wise. By means of a key concentration of measure lemma arising from non-asymptotic random matrix arguments, we prove that, as $n, p, T$ grow large at the same rate, the resolvent $Q = (G + \gamma I_T)^{-1}$, for $\gamma > 0$, behaves similarly to the resolvents met in sample covariance matrix models, involving notably the moment $\Phi = \frac{T}{n}\mathbb{E}[G]$; this provides in passing a deterministic equivalent for the empirical spectral measure of $G$. Application-wise, this result enables the estimation of the asymptotic performance of single-layer random neural networks. This in turn yields practical insights into the mechanisms at play in random neural networks, entails several unexpected consequences, and provides a fast practical means to tune the network hyperparameters.
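As a toy illustration of the model defined in the abstract (a minimal sketch, not the authors' code: the dimensions, the ReLU choice of $\sigma$, the Gaussian data, and the value of $\gamma$ are all illustrative assumptions), the following Python snippet builds $\Sigma = \sigma(WX)$, the Gram matrix $G$, its resolvent $Q$, and the eigenvalues defining the empirical spectral measure of $G$:

```python
import numpy as np

# Illustrative dimensions (assumption): n, p, T grow large at the same rate
n, p, T = 512, 256, 1024
gamma = 1e-2  # regularization gamma > 0 (illustrative value)

rng = np.random.default_rng(0)
X = rng.standard_normal((p, T)) / np.sqrt(p)  # data matrix of bounded norm
W = rng.standard_normal((n, p))               # independent zero-mean, unit-variance entries

sigma = lambda t: np.maximum(t, 0.0)          # a Lipschitz activation (ReLU, as an example)
Sigma = sigma(W @ X)                          # sigma(WX), applied entry-wise

G = Sigma.T @ Sigma / T                       # Gram matrix G = (1/T) Sigma^T Sigma
Q = np.linalg.inv(G + gamma * np.eye(T))      # resolvent Q = (G + gamma I_T)^{-1}

eigvals = np.linalg.eigvalsh(G)               # spectrum of G; its histogram is the
                                              # empirical spectral measure the paper
                                              # approximates by a deterministic equivalent
```

Under the paper's regime, a histogram of `eigvals` computed this way would be compared against the deterministic equivalent driven by $\Phi = \frac{T}{n}\mathbb{E}[G]$.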