
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
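
For orientation, here is a minimal sketch of the idea behind the description above: symmetric per-tensor int8 quantization in NumPy. The helper names (quantize_int8, dequantize) are illustrative assumptions and are not taken from the cited paper or any listed implementation.

# Minimal sketch (assumption, not the cited paper's method): map float32 values
# to int8 via a single scale factor, then map back to inspect the rounding error.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of a float32 array to int8."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("max abs error:", np.abs(w - w_hat).max())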

Papers

Showing 176-200 of 4925 papers

Title | Status | Hype
Gaussian Weight Sampling for Scalable, Efficient and Stable Pseudo-Quantization Training | - | 0
Addition is almost all you need: Compressing neural networks with double binary factorization | Code | 0
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
Accurate KV Cache Quantization with Outlier Tokens Tracing | Code | 1
EA-3DGS: Efficient and Adaptive 3D Gaussians with Highly Enhanced Quality for outdoor scenes | Code | 1
A probabilistic framework for dynamic quantization | - | 0
VQ-Logits: Compressing the Output Bottleneck of Large Language Models via Vector Quantized Logits | - | 0
TransPL: VQ-Code Transition Matrices for Pseudo-Labeling of Time Series Unsupervised Domain Adaptation | Code | 0
Analog Foundation Models | Code | 1
Zero-shot Quantization: A Comprehensive Survey | - | 0
Efficient Mixed Precision Quantization in Graph Neural Networks | Code | 0
Resource-Efficient Language Models: Quantization for Fast and Accessible Inference | - | 0
Multi-Layer Hierarchical Federated Learning with Quantization | - | 0
Efficient ANN-SNN Conversion with Error Compensation Learning | - | 0
Cognitive Non-Coherent Jamming Techniques for Frequency Selective Attacks | - | 0
An Extra RMSNorm is All You Need for Fine Tuning to 1.58 Bits | - | 0
QuantX: A Framework for Hardware-Aware Quantization of Generative AI Workloads | - | 0
Continuous Visual Autoregressive Generation via Score Maximization | Code | 1
Bang for the Buck: Vector Search on Cloud CPUs | - | 0
Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption | - | 0
Semantic Retention and Extreme Compression in LLMs: Can We Have Both? | - | 0
GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance | Code | 2
Improving Block-Wise LLM Quantization by 4-bit Block-Wise Optimal Float (BOF4): Analysis and Variations | - | 0
Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference | - | 0
LightNobel: Improving Sequence Length Limitation in Protein Structure Prediction Model via Adaptive Activation Quantization | - | 0
Page 8 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified