
Rotation Invariant Quantization for Model Compression

2023-03-03

Joseph Kampeas, Yury Nahshan, Hanoch Kremer, Gil Lederman, Shira Zaloshinski, Zheng Li, Emir Haleva


Abstract

Post-training Neural Network (NN) model compression is an attractive approach for deploying large, memory-consuming models on devices with limited memory resources. In this study, we investigate the rate-distortion tradeoff for NN model compression. First, we suggest a Rotation-Invariant Quantization (RIQ) technique that utilizes a single parameter to quantize the entire NN model, yielding a different rate at each layer, i.e., mixed-precision quantization. Then, we prove that our rotation-invariant approach is optimal in terms of compression. We rigorously evaluate RIQ and demonstrate its capabilities on various models and tasks. For example, RIQ facilitates 19.4× and 52.9× compression ratios on pre-trained VGG dense and pruned models, respectively, with less than 0.4% accuracy degradation. Code is available at github.com/ehaleva/RIQ.
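To make the idea of "a single parameter yielding a different rate at each layer" concrete, here is a minimal, illustrative sketch. It is not the authors' implementation (see the linked repository for that); the function names, the global knob `gamma`, and the norm-based step-size rule are assumptions used only to show how one scalar can induce per-layer (mixed-precision) quantization.

```python
import numpy as np

def riq_style_quantize(layers, gamma=0.02):
    """Illustrative single-parameter quantization (assumed scheme, not the paper's exact rule).

    A single global parameter `gamma` sets each layer's step size in proportion
    to a rotation-invariant quantity (the layer's norm), so different layers end
    up with different effective bit-widths, i.e., mixed precision.
    """
    quantized = []
    for w in layers:
        # Step size scales with the layer norm; rotating w leaves the norm,
        # and hence the step, unchanged.
        step = gamma * np.linalg.norm(w) / np.sqrt(w.size)
        codes = np.round(w / step).astype(np.int32)  # integer codes to be entropy-coded
        quantized.append((codes, step))
    return quantized

def dequantize(quantized):
    """Reconstruct approximate weights from (codes, step) pairs."""
    return [codes * step for codes, step in quantized]

# Toy usage: layers of different shapes/scales receive different effective rates.
layers = [np.random.randn(64, 64), 0.1 * np.random.randn(256, 64)]
recon = dequantize(riq_style_quantize(layers))
```

The point of the sketch is only the mechanism: one scalar controls all layers, while the per-layer step follows from a rotation-invariant statistic of each layer's weights.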
