Loss-aware Weight Quantization of Deep Networks

2018-02-23 · ICLR 2018

Lu Hou, James T. Kwok

Abstract

The huge size of deep networks hinders their use on small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and to m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.
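The abstract describes the scheme only at a high level. As a concrete illustration of one of its ingredients, the sketch below ternarizes a weight vector into {-alpha_n, 0, +alpha_p}, fitting separate scales to the positive and negative weights by weighted least squares under a diagonal curvature estimate. The curvature vector d, the threshold heuristic, and all function names are assumptions made for illustration; this is a minimal sketch, not the paper's actual loss-aware optimization procedure.

```python
import numpy as np

def loss_aware_ternarize(w, d, delta=None):
    """Ternarize weights w into {-alpha_n, 0, +alpha_p}.

    Minimizes the curvature-weighted squared error
        sum_i d_i * (w_i - q_i)^2
    for a fixed sparsity threshold delta, where d is a diagonal
    curvature estimate (an assumption here; e.g., an optimizer's
    second-moment statistics). Separate scales alpha_p / alpha_n
    are fitted to the positive and negative weights, matching the
    abstract's "possibly different scaling parameters".
    """
    w = np.asarray(w, dtype=np.float64)
    d = np.asarray(d, dtype=np.float64)
    if delta is None:
        # Heuristic threshold, illustrative only (not from the paper).
        delta = 0.7 * np.mean(np.abs(w))

    pos = w > delta
    neg = w < -delta

    # Weighted least-squares optimal scale for each sign group:
    # alpha = sum(d_i * |w_i|) / sum(d_i) over that group's support.
    alpha_p = (d[pos] @ w[pos]) / d[pos].sum() if pos.any() else 0.0
    alpha_n = (d[neg] @ -w[neg]) / d[neg].sum() if neg.any() else 0.0

    q = np.zeros_like(w)
    q[pos] = alpha_p
    q[neg] = -alpha_n
    return q, alpha_p, alpha_n

# Example usage with a stand-in curvature estimate.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
d = rng.uniform(0.5, 1.5, size=1000)
q, alpha_p, alpha_n = loss_aware_ternarize(w, d)
```

The same weighted fit extends naturally to more quantization levels: with m-bit codebooks one can alternate between assigning each weight to its nearest level and refitting the scales, though the paper's exact m-bit procedure is not reproduced here.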
