Optimizing Approximate Leave-one-out Cross-validation to Tune Hyperparameters

2020-11-20

Ryan Burn

Abstract

For a large class of regularized models, leave-one-out cross-validation can be efficiently estimated with an approximate leave-one-out formula (ALO). We consider the problem of adjusting hyperparameters so as to optimize ALO. We derive efficient formulas to compute the gradient and Hessian of ALO and show how to apply a second-order optimizer to find hyperparameters. We demonstrate the usefulness of the proposed approach by finding hyperparameters for regularized logistic regression and ridge regression on various real-world data sets.
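
To make the approach concrete, here is a minimal sketch for the ridge regression case, where the leave-one-out error has an exact closed form through the hat matrix H = X (X^T X + lam I)^{-1} X^T, so ALO coincides with exact LOO. The function names (ridge_loo_error, newton_tune) are illustrative, and the sketch approximates the gradient and Hessian of the LOO objective with finite differences rather than the analytic formulas derived in the paper.

import numpy as np

def ridge_loo_error(log_lam, X, y):
    # Exact leave-one-out MSE for ridge regression. For ridge, the LOO
    # residuals have a closed form via the hat matrix
    # H = X (X^T X + lam I)^{-1} X^T:  e_i = (y_i - yhat_i) / (1 - H_ii).
    lam = np.exp(log_lam)  # optimize in log space so lam stays positive
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def newton_tune(f, x0, args=(), eps=1e-4, tol=1e-8, max_iter=50):
    # Second-order (Newton) search in one hyperparameter, with central
    # finite differences standing in for the paper's analytic
    # gradient/Hessian formulas.
    x = x0
    for _ in range(max_iter):
        f0 = f(x, *args)
        fp = f(x + eps, *args)
        fm = f(x - eps, *args)
        grad = (fp - fm) / (2.0 * eps)
        hess = (fp - 2.0 * f0 + fm) / eps ** 2
        step = grad / hess if hess > 0 else grad  # fall back to a gradient step
        if abs(step) < tol:
            break
        x -= step
    return x

# Usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=200)
log_lam = newton_tune(ridge_loo_error, x0=0.0, args=(X, y))
print("tuned lambda:", np.exp(log_lam))

For models without an exact LOO formula, such as regularized logistic regression, the same optimization loop applies with the ALO estimate in place of the exact LOO error.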
