
Pyramid Vector Quantization and Bit Level Sparsity in Weights for Efficient Neural Networks Inference

2019-11-24

Vincenzo Liguori


Abstract

This paper discusses three basic blocks for the inference of convolutional neural networks (CNNs). Pyramid Vector Quantization (PVQ) is presented as an effective quantizer for CNN weights, resulting in highly sparse and compressible networks. Properties of PVQ are exploited to eliminate multipliers during inference while maintaining high performance. The result is then extended to other quantized weights. The Tiny Yolo v3 CNN is used to compare these basic blocks.
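As background for the quantizer named in the abstract (this is an illustrative sketch, not the paper's exact procedure): PVQ maps a real vector onto the nearest integer point of the pyramid S(N, K) = {y ∈ Zᴺ : Σ|yᵢ| = K}, which yields sparse small-integer weights. A minimal greedy implementation, assuming the standard scale-round-repair approach:

```python
import numpy as np

def pvq_quantize(x, K):
    """Quantize x to an integer vector y with sum(|y_i|) == K,
    i.e. a point on the pyramid S(N, K). Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    l1 = np.abs(x).sum()
    if l1 == 0:
        # Degenerate input: pick an arbitrary pyramid point.
        y = np.zeros(x.size, dtype=int)
        y[0] = K
        return y
    scaled = K * x / l1            # lies exactly on the L1 sphere of radius K
    y = np.rint(scaled).astype(int)
    sgn = np.where(x >= 0, 1, -1)
    # Repair step: rounding may leave |y|_1 != K; greedily adjust the
    # entries whose magnitudes are most under- or over-quantized.
    while np.abs(y).sum() < K:
        i = np.argmax(np.abs(scaled) - np.abs(y))
        y[i] += sgn[i]             # grow the most under-quantized magnitude
    while np.abs(y).sum() > K:
        cand = np.where(y != 0, np.abs(y) - np.abs(scaled), -np.inf)
        i = np.argmax(cand)
        y[i] -= np.sign(y[i])      # shrink the most over-quantized magnitude
    return y
```

Because the output entries are small integers summing (in magnitude) to K, many are zero, which is the source of the sparsity the abstract refers to.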
