
Traditional and accelerated gradient descent for neural architecture search

2020-06-26

Nicolas Garcia Trillos, Felix Morales, Javier Morales

Abstract

In this paper we introduce two algorithms for neural architecture search (NASGD and NASAGD), following the theoretical work by two of the authors [5], which used the geometric structure of optimal transport to introduce the conceptual basis for new notions of traditional and accelerated gradient descent for the optimization of a function on a semi-discrete space. Our algorithms, which use the network morphism framework introduced in [2] as a baseline, can analyze forty times as many architectures as the hill-climbing methods of [2, 14] while using the same computational resources and time, and they achieve comparable accuracy. For example, on CIFAR-10, NASGD designs and trains networks with an error rate of 4.06% in only 12 hours on a single GPU.
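The abstract refers to the network-morphism hill-climbing baseline of [2]. The sketch below is only an illustrative, simplified rendering of that kind of loop, not the authors' NASGD/NASAGD method: every name in it (apply_random_morphism, train_briefly, validation_accuracy, and the depth/width architecture dictionary) is a hypothetical placeholder standing in for real function-preserving morphisms, a short training run, and a held-out evaluation.

```python
"""Illustrative sketch of network-morphism hill climbing (cf. [2]).
Not the authors' code; all functions below are hypothetical stand-ins."""

import copy
import random


def apply_random_morphism(arch):
    # Placeholder: a real morphism would widen a layer, deepen the network,
    # or insert a skip connection while preserving the computed function.
    child = copy.deepcopy(arch)
    child["depth"] += random.choice([0, 1])
    child["width"] = int(child["width"] * random.choice([1.0, 1.25]))
    return child


def train_briefly(arch):
    # Placeholder for a short training run (e.g. a few epochs of SGD).
    pass


def validation_accuracy(arch):
    # Placeholder: return accuracy on a held-out set; here a toy proxy.
    return 1.0 - 1.0 / (arch["depth"] * arch["width"])


def morphism_hill_climbing(initial_arch, n_steps=5, n_children=8):
    # Each step: spawn children via morphisms, train them briefly,
    # and keep the best-performing architecture.
    best = initial_arch
    for _ in range(n_steps):
        children = [apply_random_morphism(best) for _ in range(n_children)]
        for child in children:
            train_briefly(child)
        best = max(children + [best], key=validation_accuracy)
    return best


if __name__ == "__main__":
    seed_arch = {"depth": 4, "width": 16}
    print(morphism_hill_climbing(seed_arch))
```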
