
Searching for A Robust Neural Architecture in Four GPU Hours

2019-10-10 · CVPR 2019 · Code Available

Xuanyi Dong, Yi Yang


Abstract

Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.

Tasks

Neural Architecture Search · Image Classification

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10 | GDAS (FRC) | Top-1 Error Rate (%) | 2.5 | | Unverified |
| CIFAR-10 | GDAS | Top-1 Error Rate (%) | 3.4 | | Unverified |
| NAS-Bench-201, CIFAR-10 | GDAS | Test Accuracy (%) | 93.61 | | Unverified |
| NAS-Bench-201, CIFAR-100 | GDAS | Test Accuracy (%) | 70.7 | | Unverified |
| NAS-Bench-201, ImageNet-16-120 | GDAS | Test Accuracy (%) | 41.71 | | Unverified |

Reproductions

No reproductions yet. Be the first to reproduce this paper.