
Sharpened Error Bounds for Random Sampling Based ℓ₂ Regression

2014-03-30

Shusen Wang

Abstract

Given a data matrix X ∈ ℝ^{n×d} and a response vector y ∈ ℝ^n with n > d, it costs O(n d²) time and O(n d) space to solve the least squares regression (LSR) problem exactly. When n and d are both large, exactly solving the LSR problem is very expensive. When n ≫ d, one feasible approach to speeding up LSR is to randomly embed y and all columns of X into a smaller subspace ℝ^c; the induced LSR problem has the same number of columns but far fewer rows, and it can be solved in O(c d²) time and O(c d) space. We discuss in this paper two random sampling based methods for solving LSR more efficiently. Previous work showed that leverage score based sampling achieves 1+ε accuracy when c ≥ O(d ε⁻² log d). In this paper we sharpen this error bound, showing that c = O(d log d + d ε⁻¹) is enough for achieving 1+ε accuracy. We also show that when c ≥ O(μ d ε⁻² log d), where μ denotes the coherence of X, the uniform sampling based LSR attains a 2+ε bound with positive probability.
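The leverage-score sampling scheme described above can be sketched as follows. This is an illustrative NumPy implementation, not the paper's code: the function name and the choice of c are assumptions, and leverage scores are computed from a thin QR factorization of X (sampled rows of X and y are rescaled by 1/√(c·pᵢ) so that the sketched problem is unbiased).

```python
import numpy as np

def leverage_score_sampling_lsr(X, y, c, seed=None):
    """Approximate LSR by sampling c rows with leverage-score probabilities.

    Illustrative sketch of the technique in the abstract: sample rows of
    [X, y] with probabilities proportional to the leverage scores of X,
    rescale them, and solve the induced c x d problem in O(c d^2) time.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Leverage scores are the squared row norms of an orthonormal
    # basis U of the column space of X; they sum to rank(X).
    U, _ = np.linalg.qr(X)
    lev = np.sum(U**2, axis=1)
    p = lev / lev.sum()                      # sampling probabilities
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Rescale sampled rows so the sketched objective is unbiased.
    scale = 1.0 / np.sqrt(c * p[idx])
    X_s = X[idx] * scale[:, None]
    y_s = y[idx] * scale
    # Solve the small c x d least squares problem.
    w, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
    return w
```

Uniform sampling corresponds to replacing p with the constant vector 1/n; per the bound above, it needs c larger by a factor of the coherence μ to guarantee a (weaker) 2+ε approximation.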