SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
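
The snippet below is a minimal sketch of this idea in NumPy, not code from the source paper: it quantizes a float32 tensor to int8 using a symmetric per-tensor scale (max-abs mapped to 127, a common convention) and dequantizes back to approximate floats. The names quantize_int8 and dequantize are illustrative.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: float32 -> (int8 codes, scale)."""
    max_abs = float(np.abs(x).max())
    # Map [-max_abs, max_abs] onto the int8 range [-127, 127]; guard against all-zero input.
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([0.31, -1.20, 0.07, 2.55], dtype=np.float32)
q, scale = quantize_int8(x)
print(q)                     # int8 codes, e.g. [ 15 -60   3 127]
print(dequantize(q, scale))  # close to x, up to rounding error of about scale/2
```

int16 follows the same recipe with a wider code range (±32767), trading some of the memory savings for lower rounding error.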

Papers

Showing 4876–4900 of 4925 papers

Title | Status | Hype
General Point Model Pretraining with Autoencoding and Autoregressive | Code | 0
Conditional Probability Models for Deep Image Compression | Code | 0
Applying generative neural networks for fast simulations of the ALICE (CERN) experiment | Code | 0
Computational data analysis for first quantization estimation on JPEG double compressed images | Code | 0
Compressing Word Embeddings via Deep Compositional Code Learning | Code | 0
Generalized Relevance Learning Grassmann Quantization | Code | 0
Fate: Fast Edge Inference of Mixture-of-Experts Models via Cross-Layer Gate | Code | 0
Generalized Learning Vector Quantization for Classification in Randomized Neural Networks and Hyperdimensional Computing | Code | 0
Robustness Analysis of Deep Learning Frameworks on Mobile Platforms | Code | 0
GANQ: GPU-Adaptive Non-Uniform Quantization for Large Language Models | Code | 0
FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture | Code | 0
FPQVAR: Floating Point Quantization for Visual Autoregressive Model with FPGA Hardware Co-design | Code | 0
FP4DiT: Towards Effective Floating Point Quantization for Diffusion Transformers | Code | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
Does quantization affect models' performance on long-context tasks? | Code | 0
Dequantization and Color Transfer with Diffusion Models | Code | 0
Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance | Code | 0
Foundations of Large Language Model Compression -- Part 1: Weight Quantization | Code | 0
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks | Code | 0
Q&C: When Quantization Meets Cache in Efficient Image Generation | Code | 0
ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning | Code | 0
FLoCoRA: Federated learning compression with low-rank adaptation | Code | 0
Floating-Point Quantization Analysis of Multi-Layer Perceptron Artificial Neural Networks | Code | 0
FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization | Code | 0
Flexible Mixed Precision Quantization for Learned Image Compression | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified