Rates of Convergence for Regression with the Graph Poly-Laplacian
Nicolás García Trillos, Ryan Murray, Matthew Thorpe
Abstract
In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation. Higher order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n \subset \mathbb{R}$, we let $u_n : \{x_i\}_{i=1}^n \to \mathbb{R}$ be the minimiser of an energy consisting of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i) + \xi_i$, for iid noise $\xi_i$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large data limit $n \to \infty$. Furthermore, our rate, up to logarithms, coincides with the known rate of convergence in the usual smoothing spline model.
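To make the variational problem concrete, the following is a minimal numerical sketch of graph poly-Laplacian regression. It is not the paper's exact construction: the $\varepsilon$-neighbourhood graph with Gaussian weights, the unnormalised Laplacian $L = D - W$, the order $m = 2$, and the regularisation strength `lam` are all illustrative assumptions. The estimator solves the first-order condition of the energy $\frac{1}{n}\sum_i (u_i - y_i)^2 + \lambda\, u^\top L^m u$, namely $(I + n\lambda L^m)\,u = y$.

```python
# Minimal sketch of graph poly-Laplacian regression.
# Assumptions (not the paper's exact setup): epsilon-graph with Gaussian
# weights, unnormalised Laplacian, m = 2, and an untuned lambda.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: points x_i on [0, 1] and noisy labels y_i = g(x_i) + xi_i.
n = 500
x = rng.uniform(0.0, 1.0, size=n)
g = lambda t: np.sin(2 * np.pi * t)
y = g(x) + 0.1 * rng.standard_normal(n)

# Geometric random graph: connect points within distance eps, Gaussian weights.
eps = 0.1
d = np.abs(x[:, None] - x[None, :])
W = np.where(d < eps, np.exp(-((d / eps) ** 2)), 0.0)
np.fill_diagonal(W, 0.0)

# Unnormalised graph Laplacian and its m-th power (the graph poly-Laplacian).
L = np.diag(W.sum(axis=1)) - W
m = 2                      # regularity order; m = 1 recovers Laplacian smoothing
Lm = np.linalg.matrix_power(L, m)

# Minimise (1/n) * sum_i (u_i - y_i)^2 + lam * u^T L^m u.
# Setting the gradient to zero gives (I + n * lam * L^m) u = y.
lam = 1e-8                 # illustrative value; in practice scaled with n and eps
u = np.linalg.solve(np.eye(n) + n * lam * Lm, y)

print("mean squared error to g:", np.mean((u - g(x)) ** 2))
```

The closed-form solve is possible because both the fidelity term and the regulariser are quadratic in $u$; the analysis in the paper concerns how fast this minimiser approaches $g$ as $n \to \infty$ under appropriate scalings of $\varepsilon$ and $\lambda$.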