Kernel Methods are Competitive for Operator Learning
Pau Batlle, Matthieu Darcy, Bamdad Hosseini, Houman Owhadi
Code (official): github.com/matthieudarcy/kernelsoperatorlearning
Abstract
We present a general kernel-based framework for learning operators between Banach spaces along with a priori error analysis and comprehensive numerical comparisons with popular neural net (NN) approaches such as Deep Operator Net (DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.]. We consider the setting where the input/output spaces of the target operator $\mathcal{G}^\dagger\,:\,\mathcal{U}\to\mathcal{V}$ are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations $\phi(u_i),\varphi(v_i)$ of input/output functions $v_i=\mathcal{G}^\dagger(u_i)$ ($i=1,\ldots,N$), and the measurement operators $\phi\,:\,\mathcal{U}\to\mathbb{R}^n$ and $\varphi\,:\,\mathcal{V}\to\mathbb{R}^m$ are linear. Writing $\psi\,:\,\mathbb{R}^n\to\mathcal{U}$ and $\chi\,:\,\mathbb{R}^m\to\mathcal{V}$ for the optimal recovery maps associated with $\phi$ and $\varphi$, we approximate $\mathcal{G}^\dagger$ with $\bar{\mathcal{G}}=\chi\circ\bar{f}\circ\phi$ where $\bar{f}$ is an optimal recovery approximation of $f^\dagger:=\varphi\circ\mathcal{G}^\dagger\circ\psi\,:\,\mathbb{R}^n\to\mathbb{R}^m$. We show that, even when using vanilla kernels (e.g., linear or Mat\'ern), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning.