NAS-Bench-101: Towards Reproducible Neural Architecture Search
Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter
Code
- github.com/automl/nas_benchmarks (official, in paper)
- github.com/google-research/nasbench (official, in paper, TensorFlow)
- github.com/eric-vader/HD-BO-Additive-Models
- github.com/DaveKim3872/nasbench-hpc (TensorFlow)
Abstract
Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation. We aim to ameliorate these problems by introducing NAS-Bench-101, the first public architecture dataset for NAS research. To build NAS-Bench-101, we carefully constructed a compact yet expressive search space, exploiting graph isomorphisms to identify 423k unique convolutional architectures. We trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset of over 5 million trained models. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset. We demonstrate its utility by analyzing the dataset as a whole and by benchmarking a range of architecture optimization algorithms.
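The deduplication step mentioned above (423k unique architectures after accounting for graph isomorphisms) can be sketched in a few lines: a cell is an upper-triangular adjacency matrix plus per-node operation labels, and two cells are equivalent if some permutation of the interior nodes maps one onto the other. The brute-force canonicalization below is an illustration only, not the benchmark's actual hashing scheme (the released code uses a more scalable iterative graph-hashing algorithm); the matrices, op names, and function name here are hypothetical.

```python
from itertools import permutations

def canonical_key(matrix, ops):
    """Canonical form of a DAG cell: try every permutation of the
    interior nodes (input and output stay fixed) and keep the
    lexicographically smallest (matrix, ops) encoding. Two cells are
    isomorphic iff their canonical keys are equal."""
    n = len(ops)
    best = None
    for perm in permutations(range(1, n - 1)):
        order = [0, *perm, n - 1]  # relabelled node order
        m = tuple(tuple(matrix[order[i]][order[j]] for j in range(n))
                  for i in range(n))
        o = tuple(ops[i] for i in order)
        if best is None or (m, o) < best:
            best = (m, o)
    return best

# Two encodings of the same 4-node cell,
# input -> conv3x3 -> conv1x1 -> output,
# with the interior nodes listed in different orders.
m1 = [[0, 1, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, 1],
      [0, 0, 0, 0]]
ops1 = ["input", "conv3x3", "conv1x1", "output"]

m2 = [[0, 0, 1, 0],   # same graph, interior nodes swapped
      [0, 0, 0, 1],
      [0, 1, 0, 0],
      [0, 0, 0, 0]]
ops2 = ["input", "conv1x1", "conv3x3", "output"]
```

Collapsing the search space to one representative per canonical key is what makes exhaustive training feasible: only distinct computations are trained, and every encoding of an architecture maps back to the same pre-computed result.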