SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
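To make the float-to-fixed-point idea concrete, here is a minimal sketch of symmetric int8 quantization with a single shared scale. It is illustrative only: the function names and the max-abs scaling scheme are assumptions for this example, not the method of any paper listed on this page.

```python
# Illustrative symmetric int8 quantization (hypothetical helper names,
# not taken from any paper in the list above).

def quantize_int8(values):
    """Map float values onto int8 range [-127, 127] with one shared scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0  # one real number covers the whole tensor
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats; error per value is at most scale / 2."""
    return [x * scale for x in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

Storage drops from 32 bits to 8 bits per value, at the cost of a rounding error bounded by half the scale; real training-time schemes (like the adaptive-precision method cited above) additionally adapt the bit width or scale during back propagation.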

Papers

Showing 4251–4275 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off | Code | 0 |
| SHE: A Fast and Accurate Deep Neural Network for Encrypted Data | Code | 0 |
| SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization | — | 0 |
| Fully Quantized Network for Object Detection | — | 0 |
| Learning Channel-Wise Interactions for Binary Convolutional Neural Networks | — | 0 |
| Compressing Unknown Images With Product Quantizer for Efficient Zero-Shot Classification | — | 0 |
| Deep Metric Learning to Rank | Code | 0 |
| Enhanced Bayesian Compression via Deep Reinforcement Learning | — | 0 |
| Increasing Compactness Of Deep Learning Based Speech Enhancement Models With Parameter Pruning And Quantization Techniques | — | 0 |
| Deep Learning for Distributed Optimization: Applications to Wireless Resource Management | — | 0 |
| DeepShift: Towards Multiplication-Less Neural Networks | Code | 0 |
| Quantization Loss Re-Learning Method | — | 0 |
| Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers | Code | 0 |
| Mixed Precision Training With 8-bit Floating Point | — | 0 |
| Instant Quantization of Neural Networks using Monte Carlo Methods | — | 0 |
| A Reconfigurable Dual-Mode Tracking SAR ADC without Analog Subtraction | — | 0 |
| Texture CNN for Thermoelectric Metal Pipe Image Classification | — | 0 |
| Brain-inspired reverse adversarial examples | — | 0 |
| Mixed Precision DNNs: All you need is a good parametrization | Code | 1 |
| Learning In Practice: Reasoning About Quantization | — | 0 |
| A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent | — | 0 |
| Quantization-Based Regularization for Autoencoders | Code | 0 |
| Natural Compression for Distributed Deep Learning | — | 0 |
| Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback | Code | 0 |
| HadaNets: Flexible Quantization Strategies for Neural Networks | — | 0 |
Page 171 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified |
| 2 | DTQ | MAP | 0.79 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | Accuracy | 98.13 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | Accuracy | 92.92 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | Accuracy | 99.8 | — | Unverified |