
Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: high-cost floating-point numbers (e.g., float32) are replaced with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
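The definition above is the general idea of uniform (affine) quantization. As a rough illustration only, the sketch below maps a float32 tensor to int8 with a scale and zero-point and then dequantizes it; the helper names and the int8 range are assumptions for this example and are not taken from any paper listed on this page.

```python
# Minimal sketch of uniform affine quantization, assuming a NumPy float32 array.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values to int8 using a per-tensor scale and zero-point."""
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin)
    if scale == 0.0:          # constant tensor: avoid division by zero
        scale = 1.0
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
print(np.abs(dequantize(q, scale, zp) - x).max())  # small reconstruction error
```

The same recipe extends to int16 by widening the quantization range; the papers indexed below study many refinements of this basic scheme (mixed precision, non-uniform grids, quantized back-propagation, and so on).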

Papers

Showing 3026–3050 of 4925 papers

Title | Status | Hype
Channel Balancing for Accurate Quantization of Winograd Convolutions | | 0
Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching | Code | 0
Instance-Aware Dynamic Neural Network Quantization | Code | 0
Data-Free Network Compression via Parametric Non-Uniform Mixed Precision Quantization | | 0
Learnable Lookup Table for Neural Network Quantization | Code | 1
Mutual Quantization for Cross-Modal Search With Noisy Labels | | 0
AlignQ: Alignment Quantization With ADMM-Based Correlation Preservation | Code | 1
Mr.BiQ: Post-Training Non-Uniform Quantization Based on Minimizing the Reconstruction Error | | 0
RecDis-SNN: Rectifying Membrane Potential Distribution for Directly Training Spiking Neural Networks | | 0
SceneSqueezer: Learning To Compress Scene for Camera Relocalization | | 0
ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation | Code | 1
Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems | | 0
Studying the Interplay between Information Loss and Operation Loss in Representations for Classification | | 0
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks | Code | 1
Automatic Mixed-Precision Quantization Search of BERT | | 0
End-to-End Autoencoder Communications with Optimized Interference Suppression | | 0
HiKonv: High Throughput Quantized Convolution With Novel Bit-wise Management and Computation | | 0
Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction | | 0
Learning Cross-Scale Weighted Prediction for Efficient Neural Video Compression | Code | 1
BMPQ: Bit-Gradient Sensitivity Driven Mixed-Precision Quantization of DNNs from Scratch | | 0
Stochastic Learning Equation using Monotone Increasing Resolution of Quantization | | 0
Training Quantized Deep Neural Networks via Cooperative Coevolution | Code | 1
Distilling the Knowledge of Romanian BERTs Using Multiple Teachers | Code | 0
Manifold learning via quantum dynamics | | 0
Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats | | 0
Page 122 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified