SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
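
To make the float-to-fixed-point mapping above concrete, here is a minimal NumPy sketch of uniform symmetric int8 quantization. It illustrates the general technique only, not the method of the cited paper; the function names are ours.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Uniform symmetric quantization: map float32 values to int8.

    Returns the int8 tensor plus the scale needed to dequantize it.
    """
    # Map the largest magnitude in x onto the symmetric int8 range [-127, 127].
    scale = float(np.max(np.abs(x))) / 127.0 or 1.0  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float32 values.
    return q.astype(np.float32) * scale

# Round-trip a tensor and inspect the worst-case quantization error,
# which is bounded by half of one quantization step (scale / 2).
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
print(np.abs(x - dequantize(q, scale)).max())
```

Real schemes add per-channel scales, zero points for asymmetric ranges, and calibration, but the round-then-clip mapping above is the core operation.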

Papers

Showing 4001–4050 of 4925 papers

Title | Status | Hype
Post-Training Piecewise Linear Quantization for Deep Neural Networks | Code | 1
Optimized Feature Space Learning for Generating Efficient Binary Codes for Image Retrieval | - | 0
Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms | - | 0
Deep Learning-based Image Compression with Trellis Coded Quantization | - | 0
Communication Efficient Federated Learning over Multiple Access Channels | - | 0
Fast, Compact and Highly Scalable Visual Place Recognition through Sequence-based Matching of Overloaded Representations | Code | 1
Real-Time Object Detection and Recognition on Low-Compute Humanoid Robots using Deep Learning | - | 0
Adaptive Dithering Using Curved Markov-Gaussian Noise in the Quantized Domain for Mapping SDR to HDR Image | - | 0
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks | - | 0
A "Network Pruning Network" Approach to Deep Model Compression | - | 0
Asymmetric Correlation Quantization Hashing for Cross-modal Retrieval | - | 0
Hierarchical Modeling of Multidimensional Data in Regularly Decomposed Spaces: Synthesis and Perspective | - | 0
Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning | Code | 0
Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers | Code | 0
Embedding Compression with Isotropic Iterative Quantization | - | 0
Gaussian Approximation of Quantization Error for Estimation from Compressed Data | - | 0
Least squares binary quantization of neural networks | Code | 1
Resource-Efficient Neural Networks for Embedded Systems | - | 0
RPR: Random Partition Relaxation for Training Binary and Ternary Weight Neural Networks | - | 0
Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference | Code | 1
Attention based on-device streaming speech recognition with large speech corpus | - | 0
Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript | - | 0
Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers | - | 0
Acceleration for Compressed Gradient Descent in Distributed Optimization | - | 0
Towards Accurate Post-training Network Quantization via Bit-Split and Stitching | Code | 1
Differentiable Product Quantization for Learning Compact Embedding Layers | - | 0
ZeroQ: A Novel Zero Shot Quantization Framework | Code | 1
Efficient Systolic Array Based on Decomposable MAC for Quantized Deep Neural Networks | - | 0
New Loss Functions for Fast Maximum Inner Product Search | - | 0
Towards Unified INT8 Training for Convolutional Neural Network | - | 0
Mixed-Precision Quantized Neural Network with Progressively Decreasing Bitwidth For Image Classification and Object Detection | - | 0
Towards Efficient Training for Neural Network Quantization | Code | 1
EAST: Encoding-Aware Sparse Training for Deep Memory Compression of ConvNets | Code | 0
AdaBits: Neural Network Quantization with Adaptive Bit-Widths | Code | 0
FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference | - | 0
Adaptive Loss-aware Quantization for Multi-bit Networks | Code | 0
Neural Networks Weights Quantization: Target None-retraining Ternary (TNT) | - | 0
Interleaved Composite Quantization for High-Dimensional Similarity Search | - | 0
Efficient Error-Tolerant Quantized Neural Network Accelerators | - | 0
Attention network forecasts time-to-failure in laboratory shear experiments | - | 0
Learned Variable-Rate Image Compression with Residual Divisive Normalization | - | 0
Maximum Average Entropy-Based Quantization of Local Observations for Distributed Detection | - | 0
Compressing 3DCNNs Based on Tensor Train Decomposition | - | 0
Tensor Recovery from Noisy and Multi-Level Quantized Measurements | - | 0
Deep Model Compression Via Two-Stage Deep Reinforcement Learning | - | 0
RTN: Reparameterized Ternary Network | - | 0
Optimizing the energy consumption of spiking neural networks for neuromorphic applications | Code | 0
EDAS: Efficient and Differentiable Architecture Search | - | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | Code | 0
Post training 4-bit quantization of convolutional networks for rapid-deployment | Code | 0
Page 81 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified