SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
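To make the float-to-fixed-point mapping above concrete, here is a minimal sketch of symmetric per-tensor int8 quantization, the simplest instance of the idea; the helper names (`quantize_int8`, `dequantize`) are illustrative assumptions and do not come from any particular paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale."""
    # Map the largest magnitude in the tensor to the int8 limit (127);
    # guard against all-zero input to avoid division by zero.
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

# Toy usage: quantize a random weight matrix and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
print(f"max abs quantization error: {err:.6f}")  # bounded by ~scale/2
```

Each float is stored as a single int8 plus one shared float32 scale, so the tensor shrinks roughly 4x relative to float32 at the cost of rounding error no larger than about half the scale. The papers below explore many refinements of this basic scheme (per-channel scales, learned quantizers, binary/ternary weights, quantized gradients, and so on).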

Papers

Showing 4001–4025 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Post-Training Piecewise Linear Quantization for Deep Neural Networks | Code | 1 |
| Optimized Feature Space Learning for Generating Efficient Binary Codes for Image Retrieval | | 0 |
| Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms | | 0 |
| Deep Learning-based Image Compression with Trellis Coded Quantization | | 0 |
| Communication Efficient Federated Learning over Multiple Access Channels | | 0 |
| Fast, Compact and Highly Scalable Visual Place Recognition through Sequence-based Matching of Overloaded Representations | Code | 1 |
| Real-Time Object Detection and Recognition on Low-Compute Humanoid Robots using Deep Learning | | 0 |
| Adaptive Dithering Using Curved Markov-Gaussian Noise in the Quantized Domain for Mapping SDR to HDR Image | | 0 |
| Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks | | 0 |
| A "Network Pruning Network" Approach to Deep Model Compression | | 0 |
| Asymmetric Correlation Quantization Hashing for Cross-modal Retrieval | | 0 |
| Hierarchical Modeling of Multidimensional Data in Regularly Decomposed Spaces: Synthesis and Perspective | | 0 |
| Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning | Code | 0 |
| Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers | Code | 0 |
| Embedding Compression with Isotropic Iterative Quantization | | 0 |
| Gaussian Approximation of Quantization Error for Estimation from Compressed Data | | 0 |
| Least squares binary quantization of neural networks | Code | 1 |
| Resource-Efficient Neural Networks for Embedded Systems | | 0 |
| RPR: Random Partition Relaxation for Training Binary and Ternary Weight Neural Networks | | 0 |
| Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference | Code | 1 |
| Attention based on-device streaming speech recognition with large speech corpus | | 0 |
| Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript | | 0 |
| Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers | | 0 |
| Acceleration for Compressed Gradient Descent in Distributed Optimization | | 0 |
| Towards Accurate Post-training Network Quantization via Bit-Split and Stitching | Code | 1 |
Page 161 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |