Computing Approximate ℓ_p Sensitivities
Abstract
Recent works in dimensionality reduction for regression tasks have introduced the notion of sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering provable guarantees on the quality of the approximation after removing low-sensitivity datapoints via subsampling. However, fast algorithms for approximating ℓ_p sensitivities, which we show is equivalent to approximate ℓ_p regression, are known for only the ℓ_2 setting, in which they are termed leverage scores. In this work, we provide efficient algorithms for approximating ℓ_p sensitivities and related summary statistics of a given matrix. In particular, for a given n × d matrix, we compute an α-approximation to its ℓ_1 sensitivities at the cost of O(n/α) sensitivity computations. For estimating the total ℓ_p sensitivity (i.e., the sum of ℓ_p sensitivities), we provide an algorithm based on importance sampling of ℓ_p Lewis weights, which computes a constant-factor approximation to the total sensitivity at the cost of roughly O(√d) sensitivity computations. Furthermore, we estimate the maximum ℓ_1 sensitivity, up to a √d factor, using O(√d) sensitivity computations. We generalize all these results to ℓ_p norms for p > 1. Lastly, we experimentally show that for a wide class of matrices in real-world datasets, the total sensitivity can be quickly approximated and is significantly smaller than the theoretical prediction, demonstrating that real-world datasets have low intrinsic effective dimensionality.
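As a concrete reference for what a single "sensitivity computation" entails, the sketch below (not from the paper; the function name and library choices are my own) computes the exact ℓ_1 sensitivity of one row by linear programming, using the identity σ_i(A) = max_x |a_i^T x| / ||Ax||_1 = 1 / min{ ||Ax||_1 : a_i^T x = 1 }, valid whenever a_i ≠ 0.

```python
import numpy as np
from scipy.optimize import linprog

def l1_sensitivity(A, i):
    """Exact l1 sensitivity of row i of A via one LP:

    sigma_i = max_x |a_i^T x| / ||Ax||_1
            = 1 / min{ ||Ax||_1 : a_i^T x = 1 }   (requires a_i != 0).
    """
    n, d = A.shape
    # Decision variables z = (x, t), x in R^d, t in R^n; minimize sum(t).
    c = np.concatenate([np.zeros(d), np.ones(n)])
    # Encode |(Ax)_j| <= t_j as  Ax - t <= 0  and  -Ax - t <= 0.
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    # Normalization constraint a_i^T x = 1.
    A_eq = np.concatenate([A[i], np.zeros(n)])[None, :]
    b_eq = np.array([1.0])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    # res.fun = min ||Ax||_1 >= |a_i^T x| = 1, so sigma_i lies in (0, 1].
    return 1.0 / res.fun
```

The total-sensitivity estimator described above can likewise be illustrated, in hedged form, as importance sampling over approximate ℓ_1 Lewis weights. The fixed-point iteration below is the standard Cohen–Peng recurrence w_i ← (a_i^T (A^T W^{-1} A)^{-1} a_i)^{1/2}; the sample size and iteration count are illustrative stand-ins rather than the paper's parameters, and `l1_sensitivity` is the function defined in the previous sketch.

```python
def l1_lewis_weights(A, iters=30):
    """Approximate l1 Lewis weights via the Cohen-Peng fixed-point
    iteration (assumes A has full column rank and no zero rows)."""
    n, d = A.shape
    w = np.ones(n)
    for _ in range(iters):
        M = A.T @ (A / w[:, None])   # A^T W^{-1} A
        Minv = np.linalg.inv(M)
        # w_i <- sqrt(a_i^T Minv a_i) for every row i at once.
        w = np.sqrt(np.einsum('ij,jk,ik->i', A, Minv, A))
    return w

def estimate_total_l1_sensitivity(A, m=None, seed=0):
    """Unbiased importance-sampling estimate of sum_i sigma_i(A):
    sample rows with probability p_i proportional to their Lewis
    weights, then average sigma_i / p_i over the sample."""
    n, d = A.shape
    if m is None:
        m = 10 * int(np.ceil(np.sqrt(d)))  # illustrative O(sqrt(d)) budget
    w = l1_lewis_weights(A)
    p = w / w.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, p=p)
    return float(np.mean([l1_sensitivity(A, i) / p[i] for i in idx]))

# Example usage on a small random matrix.
A = np.random.default_rng(1).standard_normal((500, 10))
print(estimate_total_l1_sensitivity(A))
```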