
A Statistical Learning View of Simple Kriging

2022-02-15

Emilia Siviero, Emilie Chautru, Stephan Clémençon


Abstract

In the Big Data era, with the ubiquity of geolocation sensors in particular, massive datasets exhibiting a possibly complex spatial dependence structure are becoming increasingly available. In this context, the standard probabilistic theory of statistical learning does not apply directly, and guarantees of the generalization capacity of predictive rules learned from such data remain to be established. We analyze here the simple Kriging task from a statistical learning perspective, i.e. by carrying out a nonparametric finite-sample predictive analysis. Given d ≥ 1 values taken by a realization of a square integrable random field X = (X_s)_{s ∈ S}, S ⊂ ℝ², with unknown covariance structure, at sites s_1, …, s_d in S, the goal is to predict the unknown values it takes at any other location s ∈ S with minimum quadratic risk. The prediction rule is derived from a training spatial dataset: a single realization X′ of X, independent from those to be predicted, observed at n ≥ 1 locations σ_1, …, σ_n in S. Despite the connection of this minimization problem with kernel ridge regression, establishing the generalization capacity of empirical risk minimizers is far from straightforward, due to the non-i.i.d. nature of the training data X′_{σ_1}, …, X′_{σ_n} involved in the learning procedure. In this article, non-asymptotic bounds of order O_P(1/n) are proved for the excess risk of a plug-in predictive rule mimicking the true minimizer in the case of isotropic stationary Gaussian processes observed at locations forming a regular grid in the learning stage. These theoretical results are illustrated by various numerical experiments, on simulated data and on real-world datasets.
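To make the prediction task concrete, here is a minimal sketch of a zero-mean simple kriging predictor in Python/numpy. It is not the authors' implementation: the exponential covariance function and all site coordinates below are illustrative assumptions, standing in for the unknown covariance that the paper's plug-in rule would estimate from the training realization X′.

```python
import numpy as np

def simple_kriging_predict(obs_sites, obs_values, target_site, cov):
    """Zero-mean simple kriging: the minimum-quadratic-risk linear
    predictor at target_site, given field values at obs_sites.

    The weights lambda solve K lambda = k, where K is the covariance
    matrix between observed sites and k the covariance vector between
    the observed sites and the target site.
    """
    # Pairwise distances between observed sites (d x d matrix).
    dists = np.linalg.norm(obs_sites[:, None, :] - obs_sites[None, :, :], axis=-1)
    K = cov(dists)
    # Distances from each observed site to the target location.
    k = cov(np.linalg.norm(obs_sites - target_site, axis=-1))
    lam = np.linalg.solve(K, k)
    return lam @ obs_values

# Hypothetical isotropic exponential covariance C(h) = exp(-h);
# in the paper's setting this structure is unknown and must be estimated.
cov = lambda h: np.exp(-h)

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 0.5, -0.2])
prediction = simple_kriging_predict(sites, values, np.array([0.5, 0.5]), cov)
```

A useful sanity check on any kriging implementation is exact interpolation: predicting at an already-observed site returns the observed value, since the covariance vector k then coincides with a column of K.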
