Fast Optimizer Benchmark

2024-06-26

Simon Blauth, Tobias Bürger, Zacharias Häringer, Jörg Franke, Frank Hutter

Abstract

In this paper, we present the Fast Optimizer Benchmark (FOB), a tool designed for evaluating deep learning optimizers during their development. The benchmark supports tasks from multiple domains such as computer vision, natural language processing, and graph learning. The focus is on convenient usage, featuring human-readable YAML configurations, SLURM integration, and plotting utilities. FOB can be used together with existing hyperparameter optimization (HPO) tools as it handles training and resuming of runs. The modular design enables integration into custom pipelines, using it simply as a collection of tasks. We showcase an optimizer comparison as a usage example of our tool. FOB can be found on GitHub: https://github.com/automl/FOB.
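The abstract highlights FOB's human-readable YAML configurations. As a purely illustrative sketch (the field names `task`, `optimizer`, and `engine` below are assumptions, not confirmed keys from the FOB repository), such a run configuration might look like:

```yaml
# Hypothetical FOB-style experiment config (field names are illustrative,
# not taken from the actual FOB schema).
task:
  name: mnist            # which benchmark task to run
  max_epochs: 10
optimizer:
  name: adamw
  learning_rate: 1.0e-3  # hyperparameters an HPO tool could override
  weight_decay: 0.01
engine:
  seed: 42
  devices: 1             # e.g. number of GPUs requested via SLURM
```

Because the configuration is plain YAML, an external HPO tool can generate or patch these files per trial, while FOB handles launching, training, and resuming the corresponding runs.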
