
Kolmogorov-Arnold Networks: Approximation and Learning Guarantees for Functions and their Derivatives

2025-04-21

Anastasis Kratsios, Takashi Furuya

Abstract

Inspired by the Kolmogorov-Arnold superposition theorem, Kolmogorov-Arnold Networks (KANs) have recently emerged as an improved backbone for most deep learning frameworks, promising more adaptivity than their multilayer perceptron (MLP) predecessors by allowing for trainable spline-based activation functions. In this paper, we probe the theoretical foundations of the KAN architecture by showing that it can optimally approximate any Besov function in B^s_{p,q}(X) on a bounded open, or even fractal, domain X in R^d at the optimal approximation rate with respect to any weaker Besov norm B^α_{p,q}(X), where α < s. We complement our approximation guarantee with a dimension-free estimate on the sample complexity of a residual KAN model when learning a function of Besov regularity from N i.i.d. noiseless samples. Our KAN architecture incorporates contemporary deep learning wisdom by leveraging residual/skip connections between layers.
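
The paper is theoretical and the abstract gives no code, but a minimal sketch may help fix the architecture it describes: each input coordinate passes through a trainable univariate (spline-style) activation, the results are summed per output unit in Kolmogorov-Arnold fashion, and a skip connection is added between layers. The PyTorch sketch below is an illustration under stated assumptions, not the authors' construction: `ResidualKANLayer`, `n_basis`, and `grid_min`/`grid_max` are hypothetical names, and Gaussian bumps on a fixed grid stand in for a genuine B-spline basis.

```python
import torch
import torch.nn as nn


class ResidualKANLayer(nn.Module):
    """Sketch of one residual KAN layer (assumed design, not the paper's).

    Each (input, output) pair gets its own trainable univariate function,
    parameterized as a linear combination of fixed bump basis functions;
    outputs are summed over inputs, and a linear skip connection adds the
    projected input back, mirroring the abstract's residual/skip connections.
    """

    def __init__(self, in_dim, out_dim, n_basis=8, grid_min=-1.0, grid_max=1.0):
        super().__init__()
        # Fixed knot centers on [grid_min, grid_max]; Gaussian bumps stand in
        # for a B-spline basis here to keep the sketch short.
        self.register_buffer("centers", torch.linspace(grid_min, grid_max, n_basis))
        self.width = (grid_max - grid_min) / (n_basis - 1)
        # Trainable spline coefficients: one univariate function per (i, j) pair.
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, n_basis))
        # Linear skip connection (the identity is not shape-compatible in general).
        self.skip = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x):
        # x: (batch, in_dim) -> basis: (batch, in_dim, n_basis)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # Sum of univariate functions: out[b, j] = sum_{i,k} coef[i, j, k] * basis[b, i, k]
        out = torch.einsum("bik,ijk->bj", basis, self.coef)
        return out + self.skip(x)  # residual/skip connection


if __name__ == "__main__":
    layer = ResidualKANLayer(in_dim=3, out_dim=5)
    y = layer(torch.randn(4, 3))
    print(y.shape)  # torch.Size([4, 5])
```

The skip connection is kept linear rather than an identity so that layers with differing widths can still be stacked; the trainable per-edge activations are what distinguish this layer from an MLP, where a single fixed nonlinearity is applied after a learned linear map.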
