
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
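For illustration, here is a minimal NumPy sketch of uniform symmetric int8 quantization: a float32 tensor is mapped to int8 values plus a single scale factor, and approximately recovered by multiplying back. The function names (`quantize_int8`, `dequantize`) are illustrative, and this generic scheme is not the adaptive-precision back-propagation method from the cited paper.

```python
# Minimal sketch of uniform symmetric int8 quantization (illustrative only;
# not the method from the cited paper).
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 tensor onto int8; return the scale needed to dequantize."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(x)
print("max abs error:", np.abs(dequantize(q, s) - x).max())
```

The single per-tensor scale keeps the arithmetic cheap; finer-grained (per-channel or block-wise) scales trade a little overhead for lower quantization error, which is the design axis many of the papers listed below explore.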

Papers

Showing 1351–1375 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Athena: Efficient Block-Wise Post-Training Quantization for Large Language Models Using Second-Order Matrix Derivative Information | | 0 |
| BiSup: Bidirectional Quantization Error Suppression for Large Language Models | | 0 |
| OAC: Output-adaptive Calibration for Accurate Post-training Quantization | | 0 |
| A rescaling-invariant Lipschitz bound based on path-metrics for modern ReLU network parameterizations | | 0 |
| SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models | Code | 2 |
| ASI++: Towards Distributionally Balanced End-to-End Generative Retrieval | | 0 |
| ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification | Code | 1 |
| Rate-Adaptive Quantization: A Multi-Rate Codebook Adaptation for Vector Quantization-based Generative Models | Code | 1 |
| PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | Code | 5 |
| TerDiT: Ternary Diffusion Models with Transformers | Code | 2 |
| MultiCast: Zero-Shot Multivariate Time Series Forecasting Using LLMs | | 0 |
| LG-VQ: Language-Guided Codebook Learning | | 0 |
| Embedding Compression for Efficient Re-Identification | | 0 |
| Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising | | 0 |
| Distilling Vision-Language Pretraining for Efficient Cross-Modal Retrieval | | 0 |
| Integer Scale: A Free Lunch for Faster Fine-grained Quantization of LLMs | | 0 |
| MiniCache: KV Cache Compression in Depth Dimension for Large Language Models | | 0 |
| Mitigating Quantization Errors Due to Activation Spikes in GLU-Based LLMs | Code | 0 |
| AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs | | 0 |
| QGait: Toward Accurate Quantization for Gait Recognition with Binarized Input | | 0 |
| Adaptive Wireless Image Semantic Transmission and Over-The-Air Testing | | 0 |
| Communication-Efficient Federated Learning via Clipped Uniform Quantization | Code | 0 |
| Discrete Cosine Transform Based Decorrelated Attention for Vision Transformers | | 0 |
| eXmY: A Data Type and Technique for Arbitrary Bit Precision Quantization | | 0 |
| Two Heads are Better Than One: Neural Networks Quantization with 2D Hilbert Curve-based Output Representation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |