
Variance reduction in stochastic methods for large-scale regularised least-squares problems

2021-10-15 · Code Available

Yusuf Pilavci, Pierre-Olivier Amblard, Simon Barthelmé, Nicolas Tremblay

Abstract

Large-dimensional least-squares and regularised least-squares problems are expensive to solve. There exist many approximate techniques, some deterministic (like conjugate gradient), some stochastic (like stochastic gradient descent). Among the latter, a new class of techniques uses Determinantal Point Processes (DPPs) to produce unbiased estimators of the solution. In particular, they can be used to perform Tikhonov regularization on graphs using random spanning forests, a specific DPP. While the unbiasedness of these algorithms is attractive, their variance can be high. We show here that variance can be reduced by combining the stochastic estimator with a deterministic gradient-descent step, while keeping the property of unbiasedness. We apply this technique to Tikhonov regularization on graphs, where the reduction in variance is found to be substantial at very small extra cost.
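The mechanism behind the variance reduction described in the abstract can be sketched numerically. Tikhonov regularization on a graph solves (L + qI)x = qy for the Laplacian L and parameter q > 0. A minimal sketch, assuming a generic unbiased-but-noisy estimator as a stand-in for the paper's random-spanning-forest estimator (the function `noisy_estimate` below is a hypothetical placeholder, not the authors' construction): one deterministic gradient step on the quadratic objective is an affine map whose fixed point is the exact solution, so it preserves unbiasedness while contracting the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random weighted graph; Tikhonov regularization solves (L + q I) x = q y.
n, q = 5, 1.0
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
A = L + q * np.eye(n)
y = rng.standard_normal(n)
b = q * y
x_star = np.linalg.solve(A, b)          # exact Tikhonov solution

# Hypothetical stand-in for the paper's RSF-based estimator:
# any unbiased but noisy estimate of x_star.
def noisy_estimate():
    return x_star + rng.standard_normal(n)

# One deterministic gradient step on f(x) = 0.5 x^T A x - b^T x.
# The map x -> x - alpha (A x - b) is affine with fixed point x_star,
# so E[refined] = x_star (unbiasedness kept) while (I - alpha A)
# contracts the noise (variance reduced).
alpha = 1.0 / np.linalg.eigvalsh(A).max()

def refined_estimate():
    x = noisy_estimate()
    return x - alpha * (A @ x - b)

samples_raw = np.stack([noisy_estimate() for _ in range(20000)])
samples_ref = np.stack([refined_estimate() for _ in range(20000)])
print("raw bias (max abs):", np.abs(samples_raw.mean(axis=0) - x_star).max())
print("ref bias (max abs):", np.abs(samples_ref.mean(axis=0) - x_star).max())
print("variance ratio (ref/raw):",
      samples_ref.var(axis=0).sum() / samples_raw.var(axis=0).sum())
```

Both empirical biases stay near zero, while the total variance of the refined estimator drops strictly below that of the raw one; the extra cost is a single matrix-vector product per sample, matching the "very small extra cost" claimed in the abstract.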
