Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions
Yuwen Li, Guozhi Zhang
Abstract
This paper examines the L_p and W^1_p norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds of order 2m in the L_p norm and order 2m-2 in the W^1_p norm for target functions whose order-m mixed derivatives in each direction belong to L_p. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest-order L_∞ and H^1 norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.
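To make the rate statement concrete, here is a minimal LaTeX sketch of the shape of these bounds, assuming a ReLU network φ of width W and depth L approximating a Korobov function f; the hidden constants and the logarithmic factors implied by "nearly optimal" are left implicit in \lesssim, and the precise statements are in the paper:

\[
\|f - \phi\|_{L_p} \;\lesssim\; (WL)^{-2m},
\qquad
\|f - \phi\|_{W^1_p} \;\lesssim\; (WL)^{-(2m-2)}.
\]

Here 2m and 2m-2 are the super-approximation orders quoted in the abstract, measured in terms of the width-depth product.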