Approximation of functions with one-bit neural networks
C. Sinan Güntürk, Weilin Li
Abstract
The celebrated universal approximation theorems for neural networks roughly state that any reasonable function can be arbitrarily well-approximated by a network whose parameters are appropriately chosen real numbers. This paper examines the approximation capabilities of one-bit neural networks -- those whose nonzero parameters are $\pm a$ for some fixed $a \neq 0$. One of our main theorems shows that for any $f \in C^s([0,1]^d)$ with $\|f\|_\infty < 1$ and error $\varepsilon > 0$, there is a neural network $f_{NN}$ such that $|f(x) - f_{NN}(x)| \leq \varepsilon$ for all $x$ away from the boundary of $[0,1]^d$, and $f_{NN}$ is implementable either by a $\{\pm 1\}$ quadratic network with $O(\varepsilon^{-2d/s})$ parameters or by a $\{\pm \tfrac{1}{2}, \pm 1\}$ ReLU network with $O(\varepsilon^{-2d/s} \log(1/\varepsilon))$ parameters, as $\varepsilon \to 0$. We establish new approximation results for iterated multivariate Bernstein operators, error estimates for noise-shaping quantization on the Bernstein basis, and novel implementations of the Bernstein polynomials by one-bit quadratic and ReLU neural networks.
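One ingredient named in the abstract is noise-shaping quantization on the Bernstein basis. The sketch below illustrates the general idea only, not the paper's construction: it replaces the real Bernstein coefficients $f(k/n)$ with one-bit values in $\{\pm 1\}$ using a first-order $\Sigma\Delta$ recursion (the paper develops sharper noise-shaping schemes and full network implementations). The function names (`bernstein_basis`, `sigma_delta_quantize`) and the test function are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import binom

def bernstein_basis(n, x):
    # b_{n,k}(x) = C(n,k) x^k (1-x)^(n-k), computed stably as a binomial pmf.
    # Returns an array of shape (n+1, len(x)).
    k = np.arange(n + 1)
    return binom.pmf(k[:, None], n, x[None, :])

def sigma_delta_quantize(c):
    # First-order Sigma-Delta: q_k = sign(u_{k-1} + c_k), u_k = u_{k-1} + c_k - q_k.
    # For |c_k| < 1 the state u stays bounded, which is what controls the error.
    q = np.empty_like(c)
    u = 0.0
    for k, ck in enumerate(c):
        q[k] = 1.0 if u + ck >= 0 else -1.0
        u = u + ck - q[k]
    return q

f = lambda x: 0.8 * np.sin(2 * np.pi * x)  # satisfies |f| < 1, as in the theorem
n = 500
x = np.linspace(0.05, 0.95, 200)           # stay away from the boundary of [0,1]
B = bernstein_basis(n, x)
c = f(np.arange(n + 1) / n)                # real Bernstein coefficients f(k/n)
q = sigma_delta_quantize(c)                # one-bit coefficients in {+1, -1}

err_real = np.max(np.abs(f(x) - c @ B))    # unquantized Bernstein error
err_1bit = np.max(np.abs(f(x) - q @ B))    # one-bit Bernstein error
print(f"Bernstein error: {err_real:.4f}, one-bit error: {err_1bit:.4f}")
```

The point of the recursion is that the coefficient error $c_k - q_k$ is a first-order difference of a bounded sequence, so after summation by parts against the slowly varying Bernstein basis the pointwise error stays small even though each coefficient is only one bit.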