Colab NAS: Obtaining lightweight task-specific convolutional neural networks following Occam's razor
Andrea Mattia Garavagno, Daniele Leonardis, Antonio Frisoli
Code: github.com/andreamattiagaravagno/colabnas
Abstract
The current trend of applying transfer learning from convolutional neural networks (CNNs) trained on large datasets can be overkill when the target application is a custom, well-delimited problem with enough data to train a network from scratch. On the other hand, training custom, lighter CNNs requires expertise in the from-scratch case, or high-end resources in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present ColabNAS, an affordable HW NAS technique for producing lightweight task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, obtains state-of-the-art results on the Visual Wake Words dataset, a standard TinyML benchmark, in just 3.1 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernel.