
Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

2024-01-25

Yaniv Blumenfeld, Itay Hubara, Daniel Soudry

Abstract

The majority of the research on the quantization of Deep Neural Networks (DNNs) is focused on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients). However, current hardware still relies on high-accuracy core operations, the most significant being the accumulation of products. This high-precision accumulation is gradually becoming the main computational bottleneck, because, until now, the use of low-precision accumulators has led to a significant degradation in performance. In this work, we present a simple method to train and fine-tune high-end DNNs that allows, for the first time, the use of cheaper, 12-bit accumulators with no significant loss of accuracy. Lastly, we show that as the accumulation precision is decreased further, fine-grained gradient approximations can improve DNN accuracy.
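
Below is a minimal sketch of the core idea of low bit-width accumulation, simulated in NumPy: an integer matrix product whose running partial sums are clamped to a signed 12-bit range after every multiply-accumulate step. The helper `saturating_matmul` and its saturating behavior are illustrative assumptions for exposition, not the authors' implementation or a model of any specific accelerator.

```python
import numpy as np

def saturating_matmul(a, b, acc_bits=12):
    """Integer matmul whose running sums are clamped to a signed
    `acc_bits`-bit accumulator (simulating a cheap hardware accumulator)."""
    lo, hi = -(1 << (acc_bits - 1)), (1 << (acc_bits - 1)) - 1
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=np.int32)
    for i in range(m):
        for j in range(n):
            acc = 0
            for t in range(k):
                acc += int(a[i, t]) * int(b[t, j])
                acc = max(lo, min(hi, acc))  # saturate after each MAC step
            out[i, j] = acc
    return out

# Compare the simulated 12-bit accumulator against a full-precision reference.
rng = np.random.default_rng(0)
a = rng.integers(-8, 8, size=(4, 64), dtype=np.int8)
b = rng.integers(-8, 8, size=(64, 4), dtype=np.int8)
print(saturating_matmul(a, b))                   # saturating, 12-bit partial sums
print(a.astype(np.int32) @ b.astype(np.int32))   # exact reference
```

With 64-element inner products of int8 values, the exact sums can exceed the signed 12-bit range [-2048, 2047], so the two outputs diverge wherever saturation occurs; this is the kind of degradation the paper's training and fine-tuning method is designed to avoid.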
