The Fine-Grained Hardness of Sparse Linear Regression
Aparna Gupte, Vinod Vaikuntanathan
Abstract
Sparse linear regression is the well-studied inference problem where one is given a design matrix $A \in \mathbb{R}^{M \times N}$ and a response vector $b \in \mathbb{R}^M$, and the goal is to find a solution $x \in \mathbb{R}^N$ which is $k$-sparse (that is, it has at most $k$ non-zero coordinates) and minimizes the prediction error $\|Ax - b\|_2$. On the one hand, the problem is known to be NP-hard, which tells us that no polynomial-time algorithm exists unless P = NP. On the other hand, the best known algorithms for the problem do a brute-force search among $N^k$ possibilities. In this work, we show that there are no better-than-brute-force algorithms, assuming any one of a variety of popular conjectures, including the weighted $k$-clique conjecture from the area of fine-grained complexity, or the hardness of the closest vector problem from the geometry of numbers. We also show the impossibility of better-than-brute-force algorithms when the prediction error is measured in other $\ell_p$ norms, assuming the strong exponential-time hypothesis.
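For concreteness, the brute-force baseline mentioned in the abstract can be sketched as follows: enumerate every size-$k$ support and solve the least-squares problem restricted to those columns. This is an illustrative sketch (the function name and instance are ours, not from the paper), shown only to make the $N^k$-style search explicit.

```python
from itertools import combinations
import numpy as np

def brute_force_sparse_regression(A, b, k):
    """Find a k-sparse x minimizing ||Ax - b||_2 by exhaustive search.

    Illustrative sketch of the brute-force baseline: try every
    support of size k (roughly N^k choices) and solve ordinary
    least squares on the selected columns.
    """
    M, N = A.shape
    best_err, best_x = float("inf"), None
    for support in combinations(range(N), k):
        cols = A[:, support]
        # Least-squares fit restricted to the chosen k columns.
        coeffs, *_ = np.linalg.lstsq(cols, b, rcond=None)
        err = np.linalg.norm(cols @ coeffs - b)
        if err < best_err:
            best_x = np.zeros(N)
            best_x[list(support)] = coeffs
            best_err = err
    return best_x, best_err
```

On a planted instance where $b = Ax^\ast$ for a $k$-sparse $x^\ast$, this search recovers $x^\ast$ exactly (up to numerical error), but its running time grows as $\binom{N}{k} = N^{\Theta(k)}$, which is precisely the scaling the paper's lower bounds show cannot be beaten under the stated conjectures.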