
Zero-Shot Dynamic Quantization for Transformer Inference

2022-11-17 · Code Available

Yousef El-Kurdi, Jerry Quinn, Avirup Sil


Abstract

We introduce a novel run-time method that significantly reduces the accuracy loss incurred when quantizing BERT-like models to 8-bit integers. Existing quantization methods either modify the training procedure or require an additional calibration step to adjust parameters, which in turn requires a selected held-out dataset. Our method permits taking advantage of quantization without the need for these adjustments. We present results on several NLP tasks demonstrating the usefulness of this technique.
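The core idea the abstract refers to, dynamic (run-time) quantization, can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the general principle that the quantization scale is computed from the tensor's own values at inference time, so no calibration dataset or training modification is needed. All function names here are hypothetical.

```python
# Illustrative sketch of dynamic int8 quantization (not the paper's method):
# the scale is derived from the tensor itself at run time, so no held-out
# calibration set is required.

def dynamic_quantize(x):
    """Symmetric per-tensor quantization of a list of floats to int8."""
    # Scale chosen from the tensor's own run-time range -> "dynamic".
    amax = max(abs(v) for v in x) or 1.0
    scale = amax / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in x]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float values."""
    return [v * scale for v in q]

# Example: quantize a small activation vector and recover an approximation.
activations = [0.5, -1.25, 3.0, -0.01]
q, s = dynamic_quantize(activations)
approx = dequantize(q, s)
```

Static quantization, by contrast, would fix `scale` ahead of time from a calibration pass over held-out data; the dynamic variant trades a small amount of run-time overhead for that independence.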
