Exploring the Potential of Low-bit Training of Convolutional Neural Networks

2020-06-04

Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang


Abstract

In this work, we propose a low-bit training framework for convolutional neural networks, built around a novel multi-level scaling (MLS) tensor format. Our framework reduces the energy consumption of convolution operations by quantizing all convolution operands to a low bit-width format. Specifically, we propose the MLS tensor format, in which the element-wise bit-width can be largely reduced, and we describe the dynamic quantization and low-bit tensor convolution arithmetic that leverage the MLS tensor format efficiently. Experiments show that our framework achieves a better trade-off between accuracy and bit-width than previous low-bit training frameworks. For training a variety of models on CIFAR-10, a 1-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%; on larger datasets such as ImageNet, a 4-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%. Through an energy consumption simulation of the computing units, we estimate that training a variety of models with our framework achieves 8.3–10.2× and 1.9–2.3× higher energy efficiency than training with full-precision and 8-bit floating-point arithmetic, respectively.
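The abstract does not spell out the details of the MLS tensor format, but the mantissa/exponent split it describes suggests group-wise quantization with a shared power-of-two scale per group and a low-bit signed mantissa per element. The sketch below illustrates that general idea only; the function name, the grouping granularity, and the exponent-clipping rule are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def quantize_mls_like(x, mantissa_bits=4, exponent_bits=2, group_size=64):
    """Hypothetical sketch of multi-level-scaling-style quantization:
    each group of values shares a power-of-two exponent, and each
    element keeps only a low-bit signed mantissa. Returns the
    dequantized (simulated low-bit) tensor."""
    flat = x.ravel().astype(np.float32)
    pad = (-len(flat)) % group_size
    groups = np.pad(flat, (0, pad)).reshape(-1, group_size)

    # Largest representable mantissa magnitude, e.g. 7 for 4 signed bits.
    q_max = 2 ** (mantissa_bits - 1) - 1

    # Per-group exponent chosen so the largest magnitude fits the mantissa.
    max_abs = np.abs(groups).max(axis=1, keepdims=True)
    max_abs = np.where(max_abs == 0, 1.0, max_abs)
    exp = np.ceil(np.log2(max_abs / q_max))

    # Assumed design choice: group exponents are stored as small offsets
    # from a per-tensor base exponent, so clip them to the range that
    # `exponent_bits` can encode.
    base = exp.max()
    exp = np.clip(exp, base - (2 ** exponent_bits - 1), base)

    scale = 2.0 ** exp
    mantissa = np.clip(np.round(groups / scale), -q_max, q_max)
    return (mantissa * scale).reshape(-1)[: len(flat)].reshape(x.shape)
```

In a simulated low-bit training loop, a routine like this would be applied to the convolution operands (activations, weights, and gradients) before each convolution, so that the arithmetic sees only values representable in the low-bit format.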
