
Discrete Simulation Optimization for Tuning Machine Learning Method Hyperparameters

2022-01-16

Varun Ramamohan, Shobhit Singhal, Aditya Raj Gupta, Nomesh Bhojkumar Bolia


Abstract

Machine learning (ML) methods are used in most technical areas, including image recognition, product recommendation, financial analysis, medical diagnosis, and predictive maintenance. An important aspect of implementing ML methods involves controlling the learning process so as to maximize the performance of the method under consideration. Hyperparameter tuning is the process of selecting a suitable set of ML method parameters that control its learning process. In this work, we demonstrate the use of discrete simulation optimization methods such as ranking and selection (R&S) and random search for identifying a hyperparameter set that maximizes the performance of an ML method. Specifically, we use the KN R&S method, as well as the stochastic ruler random search method and one of its variants, for this purpose. We also construct the theoretical basis for applying the KN method, which determines the optimal solution with a statistical guarantee via solution space enumeration. In comparison, the stochastic ruler method asymptotically converges to a global optimum and incurs lower computational overhead. We demonstrate the application of these methods to a wide variety of machine learning models, including deep neural network models used for time series prediction and image classification. We benchmark our application of these methods against state-of-the-art hyperparameter optimization libraries such as hyperopt and mango. The KN method consistently outperforms hyperopt's random search (RS) and Tree of Parzen Estimators (TPE) methods. The stochastic ruler method outperforms the hyperopt RS method and offers statistically comparable performance with respect to hyperopt's TPE method and the mango algorithm.
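To make the stochastic ruler idea concrete, the following is a minimal sketch of the method for a discrete maximization problem. It is not the paper's implementation: the candidate set, the synthetic noisy objective, the `[a, b]` ruler bounds, and the test-count schedule `m_k` are all illustrative assumptions. The core mechanism is the one the method is known for: a proposed solution is accepted only if its noisy objective value beats a uniformly drawn "ruler" value in every one of a slowly growing number of tests.

```python
import random

def stochastic_ruler(candidates, noisy_obj, a, b, n_iter=200, seed=0):
    """Sketch of the stochastic ruler method for discrete maximization.

    candidates: discrete solution set (here, a hypothetical hyperparameter grid)
    noisy_obj:  returns a noisy observation of the objective at a solution
    [a, b]:     assumed bounds on the range of the noisy objective (the "ruler")
    """
    rng = random.Random(seed)
    x = rng.choice(candidates)
    for k in range(1, n_iter + 1):
        z = rng.choice(candidates)       # propose from the neighborhood
                                         # (here: the whole set, for simplicity)
        m_k = 1 + k // 20                # slowly growing number of ruler tests
        accept = True
        for _ in range(m_k):
            theta = rng.uniform(a, b)    # draw the "ruler"
            if noisy_obj(z) < theta:     # one failed comparison rejects the move
                accept = False
                break
        if accept:
            x = z                        # all tests passed: move to z
    return x

# Illustrative usage: a noisy quadratic objective with its optimum at x = 5.
random.seed(0)
best = stochastic_ruler(
    candidates=list(range(10)),
    noisy_obj=lambda x: 10 - (x - 5) ** 2 + random.gauss(0, 0.5),
    a=-16, b=11,
)
```

Because acceptance near the optimum becomes increasingly likely as `m_k` grows, the chain spends most of its late iterations at high-performing solutions; this is the intuition behind the asymptotic convergence property mentioned in the abstract.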
