SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
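To make the description concrete, here is a minimal sketch of symmetric per-tensor int8 quantization, one common instance of the float-to-fixed-point mapping described above. The function names and the epsilon guard are illustrative, not taken from the cited paper:

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 tensor onto the int8 grid [-127, 127] using a
    single per-tensor scale factor."""
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-8)  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max reconstruction error:", np.abs(weights - dequantize_int8(q, scale)).max())
```

Fixed-point training schemes such as the one in the cited source apply this same idea to the backward pass, quantizing gradients and activations at an adaptively chosen precision rather than keeping them in float32.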

Papers

Showing 326–350 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| FOX-NAS: Fast, On-device and Explainable Neural Architecture Search | Code | 1 |
| Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1 |
| Conditional Coding and Variable Bitrate for Practical Learned Video Coding | Code | 1 |
| FP4 All the Way: Fully Quantized Training of LLMs | Code | 1 |
| COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization | Code | 1 |
| Compression with Bayesian Implicit Neural Representations | Code | 1 |
| 1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit | Code | 1 |
| CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution | Code | 1 |
| From Analog to Digital: Multi-Order Digital Joint Coding-Modulation for Semantic Communication | Code | 1 |
| Compressing LLMs: The Truth is Rarely Pure and Never Simple | Code | 1 |
| Fine-grained Data Distribution Alignment for Post-Training Quantization | Code | 1 |
| Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks | Code | 1 |
| FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation | Code | 1 |
| Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning | Code | 1 |
| Fine-tuning Quantized Neural Networks with Zeroth-order Optimization | Code | 1 |
| Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Code | 1 |
| Compress Any Segment Anything Model (SAM) | Code | 1 |
| Few shot font generation via transferring similarity guided global style and quantization local style | Code | 1 |
| Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing | Code | 1 |
| 2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution | Code | 1 |
| FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos | Code | 1 |
| Finite Scalar Quantization: VQ-VAE Made Simple | Code | 1 |
| ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1 |
| Compact representations of convolutional neural networks via weight pruning and quantization | Code | 1 |
| Feature Quantization Improves GAN Training | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified |
| 2 | DTQ | MAP | 0.79 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | — | Accuracy | 98.13 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | — | Accuracy | 92.92 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | — | Accuracy | 99.8 | — | Unverified |