
Faster Eigenvector Computation via Shift-and-Invert Preconditioning

2016-05-26

Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford


Abstract

We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix Σ -- i.e. computing a unit vector x such that x^T Σ x ≥ (1-ε) λ_1(Σ).

Offline Eigenvector Estimation: Given an explicit A ∈ R^{n×d} with Σ = A^T A, we show how to compute an ε approximate top eigenvector in time O([nnz(A) + d·sr(A)/gap²] · log(1/ε)) and O([nnz(A)^{3/4} (d·sr(A))^{1/4} / √gap] · log(1/ε)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. By separating the gap dependence from the nnz(A) term, our first runtime improves upon the classical power and Lanczos methods. It also improves prior work using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving significantly better dependencies on sr(A) and ε. Our second running time improves these further when nnz(A) ≤ d·sr(A)/gap².

Online Eigenvector Estimation: Given a distribution D with covariance matrix Σ and a vector x_0 which is an O(gap) approximate top eigenvector for Σ, we show how to refine x_0 to an ε approximation using O(var(D)/(gap·ε)) samples from D. Here var(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x_0, we obtain improved sample complexity and runtime results under a variety of assumptions on D.

We achieve our results using a general framework that we believe is of independent interest. We give a robust analysis of the classic method of shift-and-invert preconditioning to reduce eigenvector computation to approximately solving a sequence of linear systems. We then apply fast stochastic variance reduced gradient (SVRG) based system solvers to achieve our claims.
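The core reduction described in the abstract can be sketched concretely: run power iteration on B^{-1} = (λ̂·I - Σ)^{-1} for a shift λ̂ slightly above λ_1, applying B^{-1} only through approximate linear-system solves. The matrix, shift, and step sizes below are illustrative assumptions, not values from the paper, and the inner solver here is plain gradient descent rather than the SVRG-based solver the paper actually uses; it is a minimal sketch of the preconditioning idea, not the paper's algorithm.

```python
# Sketch (with assumed example values) of shift-and-invert preconditioning:
# each outer power step on B^{-1} = (SHIFT*I - Sigma)^{-1} is realized as an
# approximate solve of B w = x, here by gradient descent on a quadratic.
import math

SIGMA = [[2.0, 1.0, 0.0],
         [1.0, 2.0, 0.0],
         [0.0, 0.0, 1.0]]   # eigenvalues 3, 1, 1; top eigenvector (1,1,0)/sqrt(2)
SHIFT = 3.1                 # must upper-bound lambda_1 so B = SHIFT*I - SIGMA is SPD

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def b_matvec(v):
    """Apply B = SHIFT*I - SIGMA."""
    sv = matvec(SIGMA, v)
    return [SHIFT * v[i] - sv[i] for i in range(len(v))]

def solve_b(rhs, steps=600, lr=0.4):
    """Approximately solve B w = rhs by gradient descent on the
    quadratic 0.5 * w^T B w - rhs^T w (valid since B is SPD)."""
    w = [0.0] * len(rhs)
    for _ in range(steps):
        g = b_matvec(w)
        w = [w[i] - lr * (g[i] - rhs[i]) for i in range(len(rhs))]
    return w

def top_eigenvector(iters=8):
    x = [1.0, 0.0, 0.0]     # arbitrary start, not orthogonal to the top eigenvector
    for _ in range(iters):
        x = solve_b(x)      # one shift-and-invert power step
        n = math.sqrt(sum(c * c for c in x))
        x = [c / n for c in x]
    return x

x = top_eigenvector()
rayleigh = sum(x[i] * matvec(SIGMA, x)[i] for i in range(3))
print(round(rayleigh, 3))  # close to lambda_1 = 3
```

Because B's smallest eigenvalue is λ̂ - λ_1 = 0.1 while the next is 2.1, B^{-1} has an eigenvalue ratio of about 21, so the outer power iteration converges in very few steps; this large inflated gap is exactly why the preconditioning pays for the cost of the inner solves.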
