EPNAS: Efficient Progressive Neural Architecture Search
Yanqi Zhou, Peng Wang, Sercan Arik, Haonan Yu, Syed Zawad, Feng Yan, Greg Diamos
Abstract
In this paper, we propose Efficient Progressive Neural Architecture Search (EPNAS), a neural architecture search (NAS) method that efficiently handles a large search space through a novel progressive search policy with performance prediction based on REINFORCE [Williams.1992.PG]. EPNAS is designed to search target networks in parallel, which makes it more scalable on parallel systems such as GPU/TPU clusters. More importantly, EPNAS generalizes to architecture search under multiple resource constraints, e.g., model size, compute complexity, or compute intensity, which is crucial for deployment on widespread platforms such as mobile and cloud. We compare EPNAS against other state-of-the-art (SoTA) network architectures (e.g., MobileNetV2 [mobilenetv2]) and efficient NAS algorithms (e.g., ENAS [pham2018efficient] and PNAS [Liu2017b]) on image recognition tasks using CIFAR10 and ImageNet. On both datasets, EPNAS achieves superior architecture search speed and recognition accuracy.
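As a rough illustration of the core idea, the following is a minimal, hypothetical sketch of REINFORCE-based search over a toy operation space with a resource-constraint penalty folded into the reward. The search space, cost table, reward function, and all hyperparameters here are illustrative assumptions, not the actual EPNAS design; a stand-in penalty replaces real validation accuracy.

```python
import math
import random

# Toy search space and a made-up per-op resource cost (illustrative only).
OPS = ["conv3x3", "conv5x5", "sep3x3", "identity"]
COST = {"conv3x3": 2.0, "conv5x5": 4.0, "sep3x3": 1.0, "identity": 0.0}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reward(arch, budget=3.0):
    # Stand-in for validation accuracy minus a resource-constraint penalty;
    # a real system would train/evaluate the sampled architecture instead.
    acc = sum(0.2 for op in arch if op != "identity")
    cost = sum(COST[op] for op in arch)
    return acc - max(0.0, cost - budget)

def reinforce_search(num_layers=3, steps=500, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Independent per-layer logits over candidate operations.
    logits = [[0.0] * len(OPS) for _ in range(num_layers)]
    baseline = 0.0  # moving-average baseline to reduce gradient variance
    for _ in range(steps):
        probs = [softmax(layer) for layer in logits]
        choices = [rng.choices(range(len(OPS)), weights=p)[0] for p in probs]
        r = reward([OPS[c] for c in choices])
        baseline = 0.9 * baseline + 0.1 * r
        adv = r - baseline
        # REINFORCE update: d log p(c) / d logit_i = 1{i=c} - p_i.
        for layer, c in enumerate(choices):
            for i in range(len(OPS)):
                grad = (1.0 if i == c else 0.0) - probs[layer][i]
                logits[layer][i] += lr * adv * grad
    # Return the greedy architecture under the learned policy.
    return [OPS[max(range(len(OPS)), key=lambda i: layer[i])]
            for layer in logits]

best = reinforce_search()
```

Because each sampled architecture is evaluated independently, many such rollouts can be dispatched in parallel across workers, which is the property the abstract highlights for GPU/TPU clusters.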