Iterative Refinement for ℓ_p-norm Regression
Deeksha Adil, Rasmus Kyng, Richard Peng, Sushant Sachdeva
Abstract
We give improved algorithms for the ℓ_p-regression problem, min_x ‖x‖_p such that Ax = b, for all p ∈ (1,2) ∪ (2,∞). Our algorithms obtain a high-accuracy solution in O_p(m^{|p−2|/(2p+|p−2|)}) ≤ O_p(m^{1/3}) iterations, where each iteration requires solving an m × m linear system, m being the dimension of the ambient space. By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving ℓ_p-regression to 1/poly(n) accuracy that run in time O_p(m^{max(ω, 7/3)}), where ω is the matrix multiplication constant. For the current best value of ω > 2.37, we can thus solve ℓ_p-regression as fast as ℓ_2-regression, for all constant p bounded away from 1. Our algorithms can be combined with fast graph Laplacian linear equation solvers to give minimum ℓ_p-norm flow / voltage solutions to 1/poly(n) accuracy on an undirected graph with m edges in O_p(m^{1 + |p−2|/(2p+|p−2|)}) ≤ O_p(m^{4/3}) time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve on the p-norm regression algorithm of [Bubeck-Cohen-Lee-Li STOC'18] and on general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for ℓ_p-norms, using the smoothed ℓ_p-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically-smoothed ℓ_p-norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
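To make the "improve the current solution by solving one linear system per iteration" structure concrete, here is a minimal, illustrative Python sketch. It is emphatically not the paper's algorithm: the quadratically-smoothed function `gamma_p` follows the definition commonly attributed to Bubeck et al. in the literature (an assumption, not taken from this abstract), and the refinement loop `lp_regression_sketch` is a plain damped-Newton/IRLS-style iteration for the simpler unconstrained problem min_x ‖Ax − b‖_p^p with p > 2; both function names are hypothetical.

```python
import numpy as np

def gamma_p(t, x, p):
    # Quadratically-smoothed |x|^p in the style of Bubeck et al. (assumed
    # definition): quadratic for |x| <= t, shifted p-th power for |x| > t.
    # The shift (p/2 - 1) * t^p makes the two branches agree at |x| = t.
    quad = (p / 2.0) * t ** (p - 2) * x ** 2
    tail = np.abs(x) ** p + (p / 2.0 - 1.0) * t ** p
    return np.where(np.abs(x) <= t, quad, tail)

def lp_regression_sketch(A, b, p=4.0, iters=50, lam=1e-12):
    """Toy refinement loop for min_x ||Ax - b||_p^p (p > 2), NOT the
    paper's method: each iteration solves one weighted linear system
    (a local quadratic model), mirroring only the outer structure of
    'improve the current solution via a crude quadratic solve'."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # l2 warm start
    for _ in range(iters):
        r = A @ x - b
        f = np.sum(np.abs(r) ** p)                    # current objective
        g = A.T @ (p * np.abs(r) ** (p - 2) * r)      # gradient of sum |r_i|^p
        w = p * (p - 1) * np.abs(r) ** (p - 2)        # local quadratic weights
        H = A.T @ (w[:, None] * A) + lam * np.eye(A.shape[1])
        d = np.linalg.solve(H, g)                     # Newton/IRLS direction
        t = 1.0                                       # backtrack to guarantee decrease
        while np.sum(np.abs(A @ (x - t * d) - b) ** p) > f and t > 1e-10:
            t *= 0.5
        x = x - t * d
    return x
```

The paper's actual scheme instead minimizes a smoothed ℓ_p objective over a subspace and needs only a crude (constant-factor) solution per step to halve the remaining error; the sketch above trades that refinement guarantee for a few lines of standard second-order iteration.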