On the Power of Truncated SVD for General High-rank Matrix Estimation Problems

2017-02-22 · NeurIPS 2017

Simon S. Du, Yining Wang, Aarti Singh

Abstract

We show that given an estimate $\widehat{A}$ that is close to a general high-rank positive semi-definite (PSD) matrix $A$ in spectral norm (i.e., $\|\widehat{A}-A\|_2 \leq \delta$), the simple truncated SVD of $\widehat{A}$ produces a multiplicative approximation of $A$ in Frobenius norm. This observation leads to many interesting results on general high-rank matrix estimation problems, which we briefly summarize below ($A$ is an $n \times n$ high-rank PSD matrix and $A_k$ is the best rank-$k$ approximation of $A$):

(1) High-rank matrix completion: By observing $\Omega\left(\frac{n \max\{\epsilon^{-4}, k^2\} \mu_0^2 \|A\|_F^2 \log n}{\sigma_{k+1}(A)^2}\right)$ elements of $A$, where $\sigma_{k+1}(A)$ is the $(k+1)$-th singular value of $A$ and $\mu_0$ is the incoherence, the truncated SVD of a zero-filled matrix satisfies $\|\widehat{A}_k - A\|_F \leq (1+O(\epsilon))\|A-A_k\|_F$ with high probability.

(2) High-rank matrix de-noising: Let $\widehat{A} = A + E$, where $E$ is a Gaussian random noise matrix with zero mean and $\nu^2/n$ variance on each entry. Then the truncated SVD of $\widehat{A}$ satisfies $\|\widehat{A}_k - A\|_F \leq (1+O(\sqrt{\nu/\sigma_{k+1}(A)}))\|A-A_k\|_F + O(\sqrt{k}\,\nu)$.

(3) Low-rank estimation of high-dimensional covariance: Given $N$ i.i.d. samples $X_1, \ldots, X_N \sim \mathcal{N}_n(0, A)$, can we estimate $A$ with a relative-error Frobenius norm bound? We show that if $N = \Omega\left(n \max\{\epsilon^{-4}, k^2\} \gamma_k(A)^2 \log N\right)$ for $\gamma_k(A) = \sigma_1(A)/\sigma_{k+1}(A)$, then $\|\widehat{A}_k - A\|_F \leq (1+O(\epsilon))\|A-A_k\|_F$ with high probability, where $\widehat{A} = \frac{1}{N}\sum_{i=1}^{N} X_i X_i^\top$ is the sample covariance.
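The de-noising setting from the abstract is easy to sketch numerically: build a PSD matrix whose top-$k$ spectrum dominates a full-rank tail, add entrywise Gaussian noise of variance $\nu^2/n$, and compare the Frobenius error of the truncated SVD against the unavoidable tail error $\|A-A_k\|_F$. This is an illustrative sketch, not code from the paper; the dimensions, rank, noise level, and the symmetrization of the noise matrix are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5  # illustrative dimension and target rank (assumptions)

# High-rank PSD matrix A: strong rank-k part plus a small full-rank tail.
G = rng.standard_normal((n, n))
tail = G @ G.T / n                        # full-rank PSD tail, O(1) spectrum
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = 50.0 * U @ U.T + tail                 # top-k part dominates the tail

# Noisy observation A_hat = A + E with i.i.d. N(0, nu^2/n) entries,
# symmetrized so that A_hat stays symmetric (an assumption of this demo).
nu = 1.0
E = rng.standard_normal((n, n)) * (nu / np.sqrt(n))
E = (E + E.T) / np.sqrt(2)
A_hat = A + E

def truncated_svd(M, r):
    """Best rank-r approximation of M (optimal in Frobenius norm)."""
    u, s, vt = np.linalg.svd(M)
    return u[:, :r] * s[:r] @ vt[:r]

A_hat_k = truncated_svd(A_hat, k)

# Error of the rank-k estimate vs. the unavoidable tail ||A - A_k||_F.
err = np.linalg.norm(A_hat_k - A)
tail_err = np.linalg.norm(A - truncated_svd(A, k))
print(err, tail_err)  # err stays within a small multiple of tail_err
```

Because any rank-$k$ matrix is at least $\|A-A_k\|_F$ away from $A$, `err` can never drop below `tail_err`; the point of the theorem is that, when $\nu$ is small relative to $\sigma_{k+1}(A)$, it does not exceed it by much either.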
