Sublinear Time Numerical Linear Algebra for Structured Matrices
Xiaofei Shi, David P. Woodruff
Abstract
We show how to solve a number of problems in numerical linear algebra, such as least squares regression, ℓ_p-regression for any p ≥ 1, low rank approximation, and kernel regression, in time T(A)·poly(log(nd)), where for a given input matrix A ∈ ℝ^{n×d}, T(A) is the time needed to compute A·y for an arbitrary vector y ∈ ℝ^d. Since T(A) ≤ O(nnz(A)), where nnz(A) denotes the number of non-zero entries of A, the time is no worse, up to polylogarithmic factors, than all of the recent advances for such problems that run in input-sparsity time. However, for many applications, T(A) can be much smaller than nnz(A), yielding significantly sublinear time algorithms. For example, in the overconstrained (1+ε)-approximate polynomial interpolation problem, A is a Vandermonde matrix and T(A) = O(n log n); in this case our running time is n·poly(log n) + poly(d/ε), and we recover the results of [avron2013sketching] as a special case. For overconstrained autoregression, which is a common problem arising in dynamical systems, T(A) = O(n log n), and we immediately obtain n·poly(log n) + poly(d/ε) time. For kernel autoregression, we significantly improve the running time of prior algorithms for general kernels. For the important case of autoregression with the polynomial kernel and arbitrary target vector b ∈ ℝ^n, we obtain even faster algorithms. Our algorithms show that, perhaps surprisingly, most of these optimization problems do not require much more time than that of a polylogarithmic number of matrix-vector multiplications.
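As a minimal illustration of the kind of structured matrix the abstract has in mind (this example is not from the paper): a circulant matrix, a close relative of the Toeplitz structure arising in autoregression, admits a matrix-vector product in T(A) = O(n log n) time via the FFT, rather than the O(n^2) cost of a dense product. The function name `circulant_matvec` is ours, chosen for illustration.

```python
import numpy as np

def circulant_matvec(c, y):
    # Multiply the circulant matrix C with first column c by the vector y.
    # C @ y is the circular convolution of c and y, so it can be computed
    # in O(n log n) time with the FFT instead of O(n^2) with a dense product.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)))

# Sanity check against the explicit dense circulant matrix C[i, j] = c[(i - j) mod n].
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
y = rng.standard_normal(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ y, circulant_matvec(c, y))
```

Algorithms parameterized by T(A), as in this paper, inherit exactly this kind of speedup: whenever a fast transform computes A·y, the whole optimization problem runs in roughly that time times polylogarithmic factors.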