
Support recovery without incoherence: A case for nonconvex regularization

2014-12-17

Po-Ling Loh, Martin J. Wainwright


Abstract

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and ℓ∞-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and ℓ∞-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in ℓ1-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
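The key structural property the abstract highlights is a regularizer whose derivative vanishes away from the origin. A minimal sketch (not from the paper's code) using the minimax concave penalty (MCP), one standard nonconvex regularizer with this property, illustrates the idea; `lam` and `b` are the usual MCP regularization and concavity parameters:

```python
def mcp_penalty(t, lam, b):
    """MCP regularizer: quadratically flattened l1 penalty that
    becomes constant once |t| exceeds b * lam."""
    a = abs(t)
    if a <= b * lam:
        return lam * a - t * t / (2 * b)
    return b * lam * lam / 2  # constant beyond the threshold

def mcp_derivative(t, lam, b):
    """Derivative of the MCP for t != 0. Unlike the l1 penalty,
    it is exactly zero for |t| >= b * lam, i.e. the penalty
    exerts no shrinkage on large coefficients."""
    if abs(t) >= b * lam:
        return 0.0  # vanishing derivative away from the origin
    sign = 1.0 if t > 0 else -1.0
    return lam * sign - t / b
```

For comparison, the ℓ1 penalty's derivative is λ·sign(t) everywhere, which is why its analysis leans on incoherence conditions that the vanishing-derivative regularizers can avoid.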
