Practical Transfer Learning for Bayesian Optimization

2018-02-06

Matthias Feurer, Benjamin Letham, Frank Hutter, Eytan Bakshy

Abstract

When hyperparameter optimization of a machine learning algorithm is repeated for multiple datasets, it is possible to transfer knowledge to an optimization run on a new dataset. We develop a new hyperparameter-free ensemble model for Bayesian optimization that is a generalization of two existing transfer learning extensions to Bayesian optimization, and we establish a worst-case bound compared to vanilla Bayesian optimization. Using a large collection of hyperparameter optimization benchmark problems, we demonstrate that our contributions substantially reduce optimization time compared to standard Gaussian process-based Bayesian optimization and improve over the current state of the art for transfer hyperparameter optimization.
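The abstract describes an ensemble surrogate that combines base models fitted on previous datasets with a model for the target dataset. As a minimal illustrative sketch (not the paper's exact method), one common way to build such an ensemble is to treat each base model's Gaussian prediction as a component of a weighted sum: if the components are independent, the weighted sum of their predictions is itself Gaussian with a closed-form mean and variance. The function name `ensemble_predict` and the choice of a simple linear combination are assumptions for illustration:

```python
import numpy as np

def ensemble_predict(means, variances, weights):
    """Combine per-model Gaussian predictions y_i ~ N(mu_i, s_i^2)
    into one Gaussian via the weighted sum sum_i w_i * y_i.

    Assuming the models' predictions are independent, the combined
    prediction has mean sum_i w_i * mu_i and variance
    sum_i w_i^2 * s_i^2.  This is only a sketch of a generic
    ensemble surrogate, not the specific model from the paper.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mean = weights @ means                 # weighted sum of means
    var = (weights ** 2) @ variances       # variance of the weighted sum
    return mean, var

# Two base models predict N(1.0, 0.5) and N(3.0, 0.5) at some point;
# weighting them equally gives N(2.0, 0.25).
mu, s2 = ensemble_predict([1.0, 3.0], [0.5, 0.5], [0.5, 0.5])
```

An acquisition function (e.g. expected improvement) could then be evaluated on the combined `mu` and `s2` exactly as in standard Gaussian process-based Bayesian optimization.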
