Learning Network Parameters in the ReLU Model
2019-09-14 · NeurIPS Workshop Deep_Invers 2019
Arya Mazumdar, Ankit Singh Rawat
Abstract
Rectified linear units, or ReLUs, have become a preferred activation function for artificial neural networks. In this paper we consider the problem of learning a generative model in the presence of nonlinearity (modeled by the ReLU functions). Given a set of signal vectors y^i ∈ R^d, i = 1, 2, …, n, we aim to learn the network parameters, i.e., the d × k matrix A, under the model y^i = ReLU(Ac^i + b), where b ∈ R^d is a random bias vector, and c^i ∈ R^k are arbitrary unknown latent vectors. We show that it is possible to recover the column space of A within an error of O(d) (in Frobenius norm) under certain conditions on the distribution of b.
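As a minimal sketch of the generative model described above (not the authors' code), the following NumPy snippet draws synthetic signals under y^i = ReLU(Ac^i + b). The dimensions d, k, n and the Gaussian choices for A, b, and the latent vectors c^i are assumptions made purely for illustration; the paper's recovery guarantee depends on conditions on the distribution of b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration only.
d, k, n = 100, 10, 1000

# Ground-truth network parameters: the d x k matrix A.
A = rng.standard_normal((d, k))

# Random bias vector b in R^d (Gaussian is an assumption here;
# the result holds under conditions on the distribution of b).
b = rng.standard_normal(d)

# Arbitrary unknown latent vectors c^i in R^k, stacked as columns.
C = rng.standard_normal((k, n))

# Observed signals y^i = ReLU(A c^i + b), one per column of Y.
Y = np.maximum(A @ C + b[:, None], 0.0)

print(Y.shape)  # (d, n): n signal vectors in R^d
```

The learning problem is then to estimate the column space of A given only the columns of Y, without access to A, b, or C.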