Learning to Tune XGBoost with XGBoost
2019-09-16
Johanna Sommer, Dimitrios Sarigiannis, Thomas Parnell
Abstract
In this short paper we investigate whether meta-learning techniques can be used to tune the hyperparameters of machine learning models more effectively using successive halving (SH). We propose a novel variant of the SH algorithm (MeSH) that uses meta-regressors to determine which candidate configurations should be eliminated at each round. We apply MeSH to the problem of tuning the hyperparameters of a gradient-boosted decision tree model. By training and tuning our meta-regressors on existing tuning jobs from 95 datasets, we demonstrate that MeSH can often find a superior solution to both SH and random search.
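To make the idea concrete, the sketch below shows plain successive halving modified so that elimination is driven by a meta-regressor's prediction of each candidate's *final* score, rather than its current score. This is an illustrative toy under assumed interfaces, not the authors' implementation: `toy_meta_regressor` and `evaluate` are hypothetical stand-ins for a regressor trained on past tuning jobs and for partial XGBoost training, respectively.

```python
import random

def toy_meta_regressor(config, round_idx, current_score):
    # Hypothetical stand-in for a trained meta-regressor: predicts the
    # final score from the configuration and the learning-curve
    # observation so far. A real one would be fit on past tuning jobs.
    return current_score + 0.1 * config["learning_rate"] * (3 - round_idx)

def evaluate(config, budget):
    # Hypothetical stand-in for partially training a gradient-boosted
    # model with `budget` boosting rounds and returning a validation score.
    random.seed(hash((config["learning_rate"], budget)) % 10000)
    return config["learning_rate"] * budget / (1 + budget) + random.uniform(0, 0.01)

def mesh(configs, num_rounds=3, budget=1):
    survivors = list(configs)
    for r in range(num_rounds):
        scores = [evaluate(c, budget) for c in survivors]
        # MeSH's key change: rank candidates by the meta-regressor's
        # predicted final score, not by the current validation score.
        preds = [toy_meta_regressor(c, r, s) for c, s in zip(survivors, scores)]
        ranked = sorted(zip(preds, survivors), key=lambda t: t[0], reverse=True)
        # Keep the top half, as in ordinary successive halving.
        survivors = [c for _, c in ranked[: max(1, len(survivors) // 2)]]
        budget *= 2  # double the per-candidate budget each round
    return survivors[0]

configs = [{"learning_rate": lr} for lr in (0.01, 0.05, 0.1, 0.3)]
best = mesh(configs)
print(best)
```

Plain SH would use `scores` directly in the ranking step; swapping in the meta-regressor's predictions is the only structural change MeSH makes.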