De-Biasing The Lasso With Degrees-of-Freedom Adjustment
Pierre C. Bellec, Cun-Hui Zhang
Abstract
This paper studies schemes to de-bias the Lasso in a linear model y = Xβ + ε where the goal is to construct confidence intervals for a_0^T β in a direction a_0, where X has iid N(0, Σ) rows. We show that previously analyzed proposals to de-bias the Lasso require a modification in order to enjoy efficiency in a full range of sparsity. This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso. Let s_0 be the true sparsity. If Σ is known and the ideal score vector proportional to XΣ^{-1}a_0 is used, the unadjusted de-biasing schemes proposed previously enjoy efficiency if s_0 ≪ n^{2/3}. However, if s_0 ≫ n^{2/3}, the unadjusted schemes cannot be efficient for certain a_0: it is then necessary to modify existing procedures by a degrees-of-freedom adjustment. This modification grants asymptotic efficiency for any a_0 when s_0/p → 0 and s_0 log(p/s_0)/n → 0. If Σ is unknown, efficiency is granted for general a_0 under a sparsity condition involving s_Ω = ‖Σ^{-1}a_0‖_0, provided that the de-biased estimate is modified with the degrees-of-freedom adjustment. The dependence on s_0, s_Ω and ‖Σ^{-1}a_0‖_1 is optimal. Our estimated score vector provides a novel methodology to handle dense a_0. Our analysis shows that the degrees-of-freedom adjustment is not needed when the initial bias in the direction a_0 is small, which is granted under stringent conditions on Σ^{-1}. The main proof argument is an interpolation path similar to that typically used to derive Slepian's lemma. It yields a new ℓ_∞ error bound for the Lasso which is of independent interest.
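To make the idea concrete, here is a minimal numerical sketch of a degrees-of-freedom-adjusted de-biased Lasso estimate of a_0^T β in the known-Σ case, specialized to Σ = I so that the ideal score vector XΣ^{-1}a_0 is simply Xa_0. The Lasso solver (plain ISTA), the tuning parameter, and the exact normalization of the correction term are this sketch's assumptions, not the paper's definitions; the one ingredient taken from the abstract is that the adjustment rescales the bias-correction term by the dimension d̂f of the model selected by the Lasso.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator used in the ISTA Lasso solver."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Lasso for (1/(2n))||y - X b||^2 + lam ||b||_1 via ISTA (illustrative solver)."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft(beta - grad / L, lam / L)
    return beta

def debiased_lasso(X, y, a0, lam):
    """De-biased estimate of a0 @ beta, with and without df adjustment.

    Assumes Sigma = I, so the ideal score vector is z0 = X @ a0.
    The (1 - df/n) rescaling below is one plausible form of the
    degrees-of-freedom adjustment, kept deliberately simple here.
    """
    n, p = X.shape
    beta_hat = lasso_ista(X, y, lam)
    df = int(np.count_nonzero(beta_hat))       # dimension of the selected model
    z0 = X @ a0                                # ideal score for Sigma = I
    resid = y - X @ beta_hat
    corr = (z0 @ resid) / (z0 @ z0)            # unadjusted bias correction
    theta_unadj = a0 @ beta_hat + corr
    theta_df = a0 @ beta_hat + corr / (1.0 - df / n)   # df-adjusted version
    return theta_unadj, theta_df, df

# Toy example: n = 100, p = 200, sparsity s_0 = 5, target direction a0 = e_1.
rng = np.random.default_rng(0)
n, p, s0 = 100, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:s0] = 1.0
y = X @ beta_true + rng.standard_normal(n)
a0 = np.zeros(p); a0[0] = 1.0
theta_unadj, theta_df, df = debiased_lasso(X, y, a0, lam=0.3)
```

The two estimates coincide when the Lasso selects nothing (d̂f = 0) and diverge as d̂f/n grows, which is consistent with the abstract's claim that the adjustment only matters once the selected model is large relative to n.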