SOTAVerified

FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer

2021-11-27 · Code Available

Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou


Abstract

Network quantization significantly reduces model inference complexity and has been widely used in real-world deployments. However, most existing quantization methods have been developed mainly for Convolutional Neural Networks (CNNs) and suffer severe degradation when applied to fully quantized vision transformers. In this work, we demonstrate that many of these difficulties arise from serious inter-channel variation in LayerNorm inputs, and present Power-of-Two Factor (PTF), a systematic method to reduce the performance degradation and inference complexity of fully quantized vision transformers. In addition, observing an extremely non-uniform distribution in attention maps, we propose Log-Int-Softmax (LIS) to sustain this distribution and simplify inference by using 4-bit quantization and the BitShift operator. Comprehensive experiments on various transformer-based architectures and benchmarks show that our Fully Quantized Vision Transformer (FQ-ViT) outperforms previous works while using even lower bit-width on attention maps. For instance, we reach 84.89% top-1 accuracy with ViT-L on ImageNet and 50.8 mAP with Cascade Mask R-CNN (Swin-S) on COCO. To our knowledge, we are the first to achieve near-lossless accuracy (within ~1% degradation) on fully quantized vision transformers. The code is available at https://github.com/megvii-research/FQ-ViT.
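The two techniques named in the abstract can be sketched numerically. Below is a minimal NumPy sketch, not the paper's exact implementation: the function names and the scale-selection heuristic are illustrative assumptions. `ptf_quantize` shares one layer-wise base scale across channels and gives each channel an integer power-of-two exponent, so per-channel rescaling reduces to a bit-shift; `log_int_softmax` rounds softmax outputs onto a 4-bit log2 grid, so a quantized weight `q` stands for `2**(-q)` and applying attention to integer values becomes a right shift.

```python
import numpy as np

def ptf_quantize(x, bits=8, k=3):
    """Sketch of a Power-of-Two Factor (PTF) quantizer for LayerNorm inputs.

    All channels share one base scale s; channel c additionally gets an
    integer exponent alpha_c in [0, 2**k - 1], so its effective scale is
    s * 2**alpha_c and per-channel rescaling is realizable as a bit-shift.
    x: (tokens, channels). The scale-selection heuristic here is assumed.
    """
    qmax = 2 ** (bits - 1) - 1
    ch_max = np.abs(x).max(axis=0)          # per-channel max magnitude
    s = ch_max.min() / qmax                 # shared base scale
    # Power-of-two exponent covering each channel's extra dynamic range.
    alpha = np.clip(np.round(np.log2(ch_max / ch_max.min())),
                    0, 2 ** k - 1).astype(np.int32)
    x_q = np.clip(np.round(x / (s * 2.0 ** alpha)),
                  -qmax - 1, qmax).astype(np.int32)
    return x_q, s, alpha

def ptf_dequantize(x_q, s, alpha):
    # The 2**alpha factor is an integer left shift; only s stays floating.
    return np.left_shift(x_q, alpha) * s

def log_int_softmax(scores, bits=4):
    """Sketch of Log-Int-Softmax (LIS): softmax outputs are quantized on a
    log2 grid, so a weight q represents 2**(-q) and multiplying an integer
    value row by the attention weight is just a right shift (v >> q)."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)
    return np.clip(np.round(-np.log2(p)), 0, 2 ** bits - 1).astype(np.int32)
```

As a sanity check, channels whose magnitudes differ by exact powers of two are recovered to within half a quantization step per channel, while attention weights land on the nearest power-of-two probability.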

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified |
| ImageNet | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified |
| ImageNet | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified |
| ImageNet | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified |
| ImageNet | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified |
| ImageNet | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified |
| ImageNet | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified |
| ImageNet | FQ-ViT (DeiT-T) | Top-1 Accuracy (%) | 71.61 | — | Unverified |

Reproductions