
Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems

2018-10-04

Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu, Vahab Mirrokni


Abstract

Consider the following class of learning schemes:

$$\widehat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta} \in \mathcal{C}} \; \sum_{j=1}^{n} \ell(\boldsymbol{x}_j^\top \boldsymbol{\beta};\, y_j) + \lambda R(\boldsymbol{\beta}), \tag{1}$$

where $\boldsymbol{x}_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$ denote the $i^{\text{th}}$ feature vector and response variable, respectively. Let $\ell$ and $R$ be the convex loss function and regularizer, $\boldsymbol{\beta}$ denote the unknown weights, and $\lambda$ be a regularization parameter; $\mathcal{C} \subseteq \mathbb{R}^p$ is a closed convex set. Finding the optimal choice of $\lambda$ is a challenging problem in high-dimensional regimes where both $n$ and $p$ are large. We propose three frameworks to obtain a computationally efficient approximation of the leave-one-out cross validation (LOOCV) risk for nonsmooth losses and regularizers. Our three frameworks are based on the primal, dual, and proximal formulations of (1). Each framework shows its strength in certain types of problems. We prove the equivalence of the three approaches under smoothness conditions. This equivalence enables us to justify the accuracy of the three methods under such conditions. We use our approaches to obtain a risk estimate for several standard problems, including generalized LASSO, nuclear norm regularization, and support vector machines. We empirically demonstrate the effectiveness of our results for non-differentiable cases.
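To make the problem concrete, the sketch below instantiates (1) with squared loss, $R$ the $\ell_1$ norm, and $\mathcal{C} = \mathbb{R}^p$ (i.e., LASSO), and computes the exact LOOCV risk by brute force: $n$ full refits, one per held-out sample. This is the expensive baseline the paper's primal/dual/proximal approximations are designed to avoid, not the paper's method itself. The synthetic data, the $\lambda$ grid, and the use of scikit-learn's `Lasso` solver are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic instance of problem (1): squared loss, R = l1 norm, C = R^p.
rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0  # sparse ground truth
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def exact_loocv_risk(X, y, lam):
    """Naive (exact) LOOCV: refit the model n times, once per held-out point.

    This costs n full optimizations, which is precisely what an
    approximate leave-one-out scheme is meant to replace.
    """
    n = X.shape[0]
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Note: sklearn's Lasso minimizes (1/(2n))||y - X b||^2 + alpha*||b||_1,
        # a rescaled version of (1); alpha plays the role of lambda.
        model = Lasso(alpha=lam, fit_intercept=False).fit(X[mask], y[mask])
        errors[i] = (y[i] - model.predict(X[i : i + 1])[0]) ** 2
    return errors.mean()

# Tune lambda by comparing LOOCV risk estimates over a small grid.
for lam in [0.01, 0.05, 0.1]:
    print(f"lambda={lam:.2f}  LOOCV risk={exact_loocv_risk(X, y, lam):.4f}")
```

Even at this modest size the loop performs 100 LASSO solves per candidate $\lambda$; in the high-dimensional regimes targeted by the paper, an approximation that avoids the refits is the practical option.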
