Computational-Statistical Tradeoffs from NP-hardness
Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan
Abstract
A central question in computer science and statistics is whether efficient algorithms can achieve the information-theoretic limits of statistical problems. Many computational-statistical tradeoffs have been shown under average-case assumptions, but since statistical problems are average-case in nature, it has been a challenge to base them on standard worst-case assumptions. In PAC learning, where such tradeoffs were first studied, the question is whether computational efficiency can come at the cost of using more samples than information-theoretically necessary. We base such tradeoffs on NP-hardness and obtain:

- Sharp computational-statistical tradeoffs assuming NP requires exponential time: For every polynomial p(n), there is an n-variate class C with VC dimension 1 such that the sample complexity of time-efficiently learning C is Θ(p(n)) (see the sketch below).

- A characterization of RP vs. NP in terms of learning: RP = NP iff every NP-enumerable class C is learnable with O(VCdim(C)) samples in polynomial time. The forward implication has been known since Pitt and Valiant (1988); we prove the reverse implication.

Notably, all our lower bounds hold against improper learners. These are the first NP-hardness results for improperly learning a subclass of polynomial-size circuits, circumventing the formal barriers of Applebaum, Barak, and Xiao (2008).
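To make the sharpness of the first result concrete, here is a hedged sketch in LaTeX contrasting the information-theoretic sample complexity of a VC-dimension-1 class with the bound faced by time-efficient learners. The dependence on the PAC parameters ε and δ follows the standard realizable-PAC bound and is an assumption on our part; the abstract itself states only the Θ(p(n)) bound.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hedged sketch: the \epsilon,\delta dependence below is the standard
% realizable-PAC bound, assumed here; the abstract states only \Theta(p(n)).

For any class $\mathcal{C}$ with $\mathrm{VCdim}(\mathcal{C}) = 1$, the classical
PAC bound gives an information-theoretic sample complexity of
\[
  m_{\mathcal{C}}(\epsilon,\delta)
    = \Theta\!\left(\frac{\mathrm{VCdim}(\mathcal{C}) + \log(1/\delta)}{\epsilon}\right)
    = \Theta\!\left(\frac{\log(1/\delta)}{\epsilon}\right),
\]
which is independent of $n$. The tradeoff result exhibits, for every polynomial
$p(n)$, an $n$-variate class $\mathcal{C}$ with $\mathrm{VCdim}(\mathcal{C}) = 1$
whose sample complexity for \emph{polynomial-time} learners is nevertheless
\[
  m^{\text{poly-time}}_{\mathcal{C}}(n) = \Theta(p(n)),
\]
assuming $\mathsf{NP}$ requires exponential time.
\end{document}
```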