
Function approximation by deep neural networks with parameters {0, ±1/2, ±1, 2}

2021-03-15

Aleksandr Beknazaryan


Abstract

In this paper it is shown that C^β-smooth functions can be approximated by deep neural networks with ReLU activation function and with parameters in {0, ±1/2, ±1, 2}. The l_0 and l_1 parameter norms of the considered networks are thus equivalent. The depth, the width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in [-1, 1]. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in [-1, 1].
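A minimal numerical sketch (not a construction from the paper) of why the l_0 and l_1 parameter norms are equivalent for such networks: every nonzero parameter has magnitude in {1/2, 1, 2}, so (1/2)·l_0 ≤ l_1 ≤ 2·l_0. The network architecture and sampling below are hypothetical, chosen only to illustrate the restricted parameter set.

```python
import numpy as np

# The restricted parameter set from the paper's title.
PARAM_SET = np.array([0.0, 0.5, -0.5, 1.0, -1.0, 2.0])

rng = np.random.default_rng(0)

def random_quantized_layer(n_in, n_out):
    """Weights and biases drawn only from the restricted set (illustrative)."""
    W = rng.choice(PARAM_SET, size=(n_out, n_in))
    b = rng.choice(PARAM_SET, size=n_out)
    return W, b

def forward(x, layers):
    """ReLU network forward pass; the last layer is kept linear."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU activation
    return x

# A small hypothetical network: 3 inputs -> 8 hidden units -> 1 output.
layers = [random_quantized_layer(3, 8), random_quantized_layer(8, 1)]
params = np.concatenate([np.r_[W.ravel(), b] for W, b in layers])

# Norm equivalence: nonzero magnitudes lie in [1/2, 2], hence
# (1/2) * l0 <= l1 <= 2 * l0.
l0 = np.count_nonzero(params)
l1 = np.abs(params).sum()
assert 0.5 * l0 <= l1 <= 2.0 * l0

y = forward(np.array([0.3, -1.2, 0.7]), layers)
print(l0, l1, y)
```

The assertion holds for any choice of parameters from this set, which is the sense in which counting active parameters (l_0) and summing their magnitudes (l_1) control each other up to a factor of 2.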
