
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
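
To make the float-to-fixed-point replacement concrete, below is a minimal sketch of symmetric per-tensor int8 quantization in Python/NumPy. The max-abs scaling scheme and the helper names (quantize_int8, dequantize) are illustrative assumptions for this page, not the method of the cited paper.

    import numpy as np

    def quantize_int8(x: np.ndarray):
        """Symmetric per-tensor quantization of float32 values to int8.

        The scale maps the largest-magnitude entry to the edge of the
        int8 range [-127, 127]. Illustrative scheme only, not the exact
        method of the cited paper.
        """
        scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)  # guard against all-zero input
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Map int8 codes back to approximate float32 values."""
        return q.astype(np.float32) * scale

    # Round-trip a random weight tensor and inspect the quantization error.
    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    print("max abs error:", np.max(np.abs(w - w_hat)))  # at most ~scale / 2

Swapping int8 for int16 (range [-32767, 32767]) shrinks the scale, and hence the round-trip error, at the cost of doubled storage per value.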

Papers

Showing 3951-3975 of 4925 papers

Title | Status | Hype
Communication-Efficient Federated Learning over Capacity-Limited Wireless Networks | - | 0
Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks | - | 0
Communication-efficient k-Means for Edge-based Machine Learning | - | 0
Communication Efficient SGD via Gradient Sampling With Bayes Prior | - | 0
Communication-Efficient Split Learning via Adaptive Feature-Wise Compression | - | 0
Communication-efficient Variance-reduced Stochastic Gradient Descent | - | 0
Compact and Robust Deep Learning Architecture for Fluorescence Lifetime Imaging and FPGA Implementation | - | 0
CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks | - | 0
Compact Neural Graphics Primitives with Learned Hash Probing | - | 0
Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms | - | 0
Compact Representation for Image Classification: To Choose or to Compress? | - | 0
Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking | - | 0
Comparing Fisher Information Regularization with Distillation for DNN Quantization | - | 0
Comparing Iterative and Least-Squares Based Phase Noise Tracking in Receivers with 1-bit Quantization and Oversampling | - | 0
Comparison of 14 different families of classification algorithms on 115 binary datasets | - | 0
Compensate Quantization Errors: Make Weights Hierarchical to Compensate Each Other | - | 0
Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners | - | 0
Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission | - | 0
CompMarkGS: Robust Watermarking for Compressed 3D Gaussian Splatting | - | 0
Component Training of Turbo Autoencoders | - | 0
Composite Code Sparse Autoencoders for first stage retrieval | - | 0
Composite Correlation Quantization for Efficient Multimodal Retrieval | - | 0
Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models | - | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | - | 0
Page 159 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified