
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
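
To make the float-to-fixed-point mapping above concrete, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization. The function names and the max-abs scale calibration are illustrative assumptions for this page, not the scheme of the cited paper or of any paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto the int8 grid with a single per-tensor scale."""
    scale = float(np.abs(x).max()) / 127.0  # max-abs calibration (an assumption here)
    if scale == 0.0:
        scale = 1.0  # degenerate all-zero tensor: any scale works
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs rounding error:", np.abs(x - x_hat).max())  # at most ~scale / 2
```

Per-tensor max-abs scaling is the simplest possible calibration; many of the papers listed below are, in effect, refinements of how this scale is chosen (per-channel scales, learned clipping ranges, or outlier handling).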

Papers

Showing papers 751–775 of 4925 (page 31 of 197)

| Title | Status | Hype |
|---|---|---|
| Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation | Code | 1 |
| Not All Image Regions Matter: Masked Vector Quantization for Autoregressive Image Generation | Code | 1 |
| MLP Fusion: Towards Efficient Fine-tuning of Dense and Mixture-of-Experts Language Models | Code | 1 |
| LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models | Code | 1 |
| Distributed Learning Systems with First-order Methods | Code | 1 |
| Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings | Code | 1 |
| Distribution-Flexible Subset Quantization for Post-Quantizing Super-Resolution Networks | Code | 1 |
| Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search | Code | 1 |
| One Loss for Quantization: Deep Hashing with Discrete Wasserstein Distributional Matching | Code | 1 |
| On Exact Bit-level Reversible Transformers Without Changing Architectures | Code | 1 |
| On the Universal Transformation of Data-Driven Models to Control Systems | Code | 1 |
| Disentanglement via Latent Quantization | Code | 1 |
| Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1 |
| DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing | Code | 1 |
| Differentiable JPEG: The Devil is in the Details | Code | 1 |
| Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models | Code | 1 |
| Algorithm-hardware Co-design for Deformable Convolution | Code | 1 |
| Differentiable Model Compression via Pseudo Quantization Noise | Code | 1 |
| DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation | Code | 1 |
| Device-Robust Acoustic Scene Classification Based on Two-Stage Categorization and Data Augmentation | Code | 1 |
| OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models | Code | 1 |
| P^2-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer | Code | 1 |
| PalQuant: Accelerating High-precision Networks on Low-precision Accelerators | Code | 1 |
| DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models | Code | 1 |
| Mixed Precision DNNs: All you need is a good parametrization | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |