
Fixed-point optimization of deep neural networks with adaptive step size retraining

2017-02-27

Sungho Shin, Yoonho Boo, Wonyong Sung


Abstract

Fixed-point optimization of deep neural networks plays an important role in hardware-based design and low-power implementations. Many deep neural networks show fairly good performance even with 2- or 3-bit precision when the quantized weights are fine-tuned by retraining. We propose an improved fixed-point optimization algorithm that estimates the quantization step size dynamically during retraining. In addition, a gradual quantization scheme is also tested, which sequentially applies fixed-point optimization from high to low precision. The experiments are conducted on feed-forward deep neural networks (FFDNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
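The core idea of adaptive step-size quantization can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a uniform symmetric quantizer and picks the step size by a simple grid search minimizing the L2 quantization error of the current weights (the function names `quantize_uniform` and `adaptive_step_size` are hypothetical). In the paper the step size is re-estimated dynamically as retraining proceeds; here only a single estimation pass on one weight tensor is shown.

```python
import numpy as np

def quantize_uniform(w, delta, bits):
    """Uniform symmetric quantizer: snaps w to an odd number of levels
    spaced by delta and clips to the representable range."""
    m = 2 ** (bits - 1) - 1          # e.g. bits=2 -> levels {-1, 0, +1}
    q = np.clip(np.round(w / delta), -m, m)
    return q * delta

def adaptive_step_size(w, bits, n_grid=200):
    """Hypothetical step-size estimator: choose the delta that minimizes
    the L2 quantization error of the given weights.  A dynamic scheme
    would repeat this periodically during retraining."""
    candidates = np.linspace(1e-3, np.abs(w).max(), n_grid)
    errors = [np.mean((w - quantize_uniform(w, d, bits)) ** 2)
              for d in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)      # toy layer weights
delta = adaptive_step_size(w, bits=2)
wq = quantize_uniform(w, delta, bits=2)
print(len(np.unique(wq)))                # at most 3 distinct levels for 2 bits
```

A gradual scheme, as mentioned in the abstract, would wrap this in a loop over decreasing bit widths (e.g. 8 → 4 → 2), retraining the network at each precision before moving to the next.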
