SOTAVerified

ISyNet: Convolutional Neural Networks design for AI accelerator

2021-09-04

Alexey Letunovskiy, Vladimir Korviakov, Vladimir Polovnikov, Anastasiia Kargapoltseva, Ivan Mazurenko, Yepan Xiong


Abstract

In recent years, Deep Learning has achieved significant results in many practical problems, such as computer vision, natural language processing, and speech recognition. For many years the main goal of research was to improve model quality, even when the resulting complexity was impractically high. For production solutions, however, which often require real-time operation, model latency plays a very important role. Current state-of-the-art architectures are found with neural architecture search (NAS) that takes model complexity into account, but designing a search space suited to specific hardware remains a challenging task. To address this problem we propose the matrix efficiency measure (MEM), a measure of the hardware efficiency of a neural architecture search space; a search space composed of hardware-efficient operations; a latency-aware scaling method; and ISyNet, a family of architectures designed to be both fast on specialized neural processing unit (NPU) hardware and accurate. We show the advantage of the designed architectures for NPU devices on ImageNet and their generalization ability on downstream classification and detection tasks.

Benchmark Results

| Dataset  | Model       | Metric           | Claimed | Verified | Status     |
|----------|-------------|------------------|---------|----------|------------|
| ImageNet | ISyNet-N3   | Top-1 Error Rate | 19.84   |          | Unverified |
| ImageNet | ISyNet-N2   | Top-1 Error Rate | 20.59   |          | Unverified |
| ImageNet | ISyNet-N1-S3| Top-1 Error Rate | 21.45   |          | Unverified |
| ImageNet | ISyNet-N1-S2| Top-1 Error Rate | 22.15   |          | Unverified |
| ImageNet | ISyNet-N1-S1| Top-1 Error Rate | 22.7    |          | Unverified |
| ImageNet | ISyNet-N1   | Top-1 Error Rate | 23.16   |          | Unverified |
| ImageNet | ISyNet-N0   | Top-1 Error Rate | 24.55   |          | Unverified |
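For reference, the Top-1 Error Rate reported above is the percentage of samples whose highest-scoring predicted class differs from the true label. A minimal sketch of that computation (not taken from the paper; the function name and toy data are illustrative):

```python
# Hedged sketch: computing a Top-1 error rate (in percent) from raw class
# scores. `top1_error`, `preds`, and `labels` are illustrative names, not
# identifiers from the ISyNet codebase.
def top1_error(predictions, labels):
    """predictions: list of per-class score lists; labels: true class indices."""
    wrong = sum(
        1
        for scores, label in zip(predictions, labels)
        # argmax over the score list, compared against the true class
        if max(range(len(scores)), key=scores.__getitem__) != label
    )
    return 100.0 * wrong / len(labels)

# Toy example: 4 samples over 3 classes; only the last one is misclassified
# (its argmax is class 1, but the true label is class 0).
preds = [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1], [0.3, 0.3, 0.4], [0.2, 0.5, 0.3]]
labels = [1, 0, 2, 0]
print(top1_error(preds, labels))  # → 25.0
```

An ISyNet-N3 entry of 19.84 therefore corresponds to a Top-1 accuracy of 80.16% on the ImageNet validation set.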
