SOTAVerified

Optimizing for Generalization in Machine Learning with Cross-Validation Gradients

2018-05-18

Shane Barratt, Rishi Sharma


Abstract

Cross-validation is the workhorse of modern applied statistics and machine learning, as it provides a principled framework for selecting the model that maximizes generalization performance. In this paper, we show that the cross-validation risk is differentiable with respect to the hyperparameters and training data for many common machine learning algorithms, including logistic regression, elastic-net regression, and support vector machines. Leveraging this differentiability, we propose a cross-validation gradient method (CVGM) for hyperparameter optimization. Our method enables efficient optimization of the cross-validation risk in high-dimensional hyperparameter spaces, and the cross-validation risk is the best available surrogate for the true generalization ability of the learning algorithm.
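The idea behind the abstract can be illustrated on a model where the differentiability is easy to verify by hand. The sketch below is not the paper's implementation; it is a minimal NumPy example, under the assumption of ridge regression (a special case of elastic-net with no L1 term), where the fit has a closed form w(λ) = (XᵀX + λI)⁻¹Xᵀy, so the k-fold validation loss is an explicit differentiable function of the hyperparameter λ and we can run gradient descent on it directly. All names (`fit_ridge`, `cv_risk_and_grad`, the learning rate, the data sizes) are illustrative choices, not from the paper.

```python
import numpy as np

def cv_risk_and_grad(X, y, lam, k=5):
    """Average k-fold validation MSE and its derivative w.r.t. lam,
    for ridge regression with closed-form per-fold solutions."""
    n, d = X.shape
    folds = np.array_split(np.arange(n), k)
    risk, grad = 0.0, 0.0
    for val_idx in folds:
        tr_idx = np.setdiff1d(np.arange(n), val_idx)
        Xt, yt = X[tr_idx], y[tr_idx]
        Xv, yv = X[val_idx], y[val_idx]
        # closed-form ridge fit: w = (Xt'Xt + lam I)^{-1} Xt' yt
        A = Xt.T @ Xt + lam * np.eye(d)
        w = np.linalg.solve(A, Xt.T @ yt)
        resid = Xv @ w - yv
        risk += resid @ resid / len(val_idx)
        # dw/dlam = -(Xt'Xt + lam I)^{-1} w, by differentiating A w = Xt' yt
        dw = -np.linalg.solve(A, w)
        grad += 2.0 * (resid @ Xv @ dw) / len(val_idx)
    return risk / k, grad / k

# synthetic regression data (illustrative, not from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X @ rng.normal(size=20) + 0.5 * rng.normal(size=100)

# gradient descent on log(lam), which keeps lam positive
log_lam = np.log(10.0)
for _ in range(200):
    lam = np.exp(log_lam)
    risk, grad = cv_risk_and_grad(X, y, lam)
    log_lam -= 0.05 * grad * lam  # chain rule: d/dlog(lam) = lam * d/dlam
print(np.exp(log_lam), risk)
```

For models without a closed-form fit (e.g. logistic regression or SVMs, as in the abstract), the same loop applies, but the hyperparameter gradient would instead come from differentiating through the optimality conditions of the training problem or from automatic differentiation of the solver.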
