SOTAVerified

On approximating ∇f with neural networks

2019-10-28

Saeed Saremi


Abstract

Consider a feedforward neural network ψ: R^d → R^d such that ψ ≈ ∇f, where f: R^d → R is a smooth function; therefore ψ must satisfy ∂_j ψ_i = ∂_i ψ_j pointwise. We prove a theorem that a network with more than one hidden layer can only represent one feature in its first hidden layer; this is a dramatic departure from the well-known results for one hidden layer. The proof of the theorem is straightforward, where two backward paths and a weight-tying matrix play the key roles. We then present the alternative, the implicit parametrization, where the neural network is φ: R^d → R and ψ = ∇φ; in addition, a "soft analysis" of ψ = ∇φ gives a dual perspective on the theorem. Throughout, we come back to recent probabilistic models that are formulated as ψ ≈ ∇f, and conclude with a critique of denoising autoencoders.
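The implicit parametrization in the abstract can be sketched numerically: take a small scalar network φ: R^d → R, set ψ = ∇φ, and check that the Jacobian of ψ (the Hessian of φ) satisfies the symmetry condition ∂_j ψ_i = ∂_i ψ_j by construction. This is a minimal illustration, not code from the paper; the network shape and weights below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 8  # input dimension and hidden width (arbitrary choices)
W1 = rng.normal(size=(m, d))
b1 = rng.normal(size=m)
w2 = rng.normal(size=m)

def phi(x):
    # Scalar one-hidden-layer network phi: R^d -> R.
    return w2 @ np.tanh(W1 @ x + b1)

def psi(x):
    # psi = grad(phi), written out analytically:
    # psi_i = sum_k w2_k (1 - tanh^2(.)_k) W1[k, i]
    h = np.tanh(W1 @ x + b1)
    return W1.T @ (w2 * (1.0 - h**2))

def jacobian(f, x, eps=1e-5):
    # Central finite differences, column by column.
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = rng.normal(size=d)
J = jacobian(psi, x)
# The Jacobian of psi is the Hessian of phi, hence symmetric.
print(np.allclose(J, J.T, atol=1e-6))  # → True
```

A direct network ψ: R^d → R^d would have to satisfy the same symmetry as a constraint on its weights, which is what the paper's theorem restricts; the gradient parametrization satisfies it for free.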
