Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
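
To make the definition concrete, the sketch below shows uniform symmetric int8 quantization in NumPy: a single scale maps the largest magnitude to 127, values are rounded and clamped to the int8 range, and dequantization recovers an approximation of the original tensor. This is a minimal textbook scheme given for illustration only; the function names are hypothetical, and it is not the method of the source paper or of any paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform symmetric quantization: float32 -> (int8 codes, scale)."""
    # Choose one scale so the largest magnitude maps to 127.
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # Round to the nearest integer and clamp to the int8 range.
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Round-trip a random weight tensor and measure the quantization error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```

The printed round-trip error is the kind of error that quantized-training schemes, such as the fixed-point back-propagation approach cited above, are designed to keep small.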

Papers

Showing 3651–3700 of 4925 papers

Title | Status | Hype
FEDZIP: A Compression Framework for Communication-Efficient Federated Learning | Code | 0
Image Splicing Detection, Localization and Attribution via JPEG Primary Quantization Matrix Estimation and Clustering | — | 0
Probabilistic Learning Vector Quantization on Manifold of Symmetric Positive Definite Matrices | — | 0
Rescuing Deep Hashing from Dead Bits Problem | — | 0
Understanding Cache Boundness of ML Operators on ARM Processors | Code | 0
CAMBI: Contrast-aware Multiscale Banding Index | — | 0
Performance of Cell-Free MmWave Massive MIMO Systems with Fronthaul Compression and DAC Quantization | — | 0
AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence | — | 0
Pruning and Quantization for Deep Neural Network Acceleration: A Survey | — | 0
Error Diffusion Halftoning Against Adversarial Examples | Code | 0
Continual Learning of Generative Models with Limited Data: From Wasserstein-1 Barycenter to Adaptive Coalescence | — | 0
Overfitting for Fun and Profit: Instance-Adaptive Data Compression | — | 0
Generative Zero-shot Network Quantization | — | 0
Time-Correlated Sparsification for Communication-Efficient Federated Learning | — | 0
ES-ENAS: Efficient Evolutionary Optimization for Large Hybrid Search Spaces | Code | 0
Multi-Task Network Pruning and Embedded Optimization for Real-time Deployment in ADAS | — | 0
Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes | — | 0
KDLSQ-BERT: A Quantized Bert Combining Knowledge Distillation with Learned Step Size Quantization | — | 0
On the quantization of recurrent neural networks | — | 0
Towards Energy Efficient Federated Learning over 5G+ Mobile Devices | — | 0
Single-path Bit Sharing for Automatic Loss-aware Model Compression | — | 0
Energy-Efficient Distributed Learning Algorithms for Coarsely Quantized Signals | — | 0
Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks | — | 0
Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices | — | 0
Computational data analysis for first quantization estimation on JPEG double compressed images | Code | 0
Quantization optimized with respect to the Haar basis | — | 0
Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning | Code | 0
Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks | — | 0
End-to-end Quantized Training via Log-Barrier Extensions | — | 0
Product Quantizer Aware Inverted Index for Scalable Nearest Neighbor Search | — | 0
Practical Locally Private Federated Learning with Communication Efficiency | — | 0
Uniform-Precision Neural Network Quantization via Neural Channel Expansion | — | 0
Incremental few-shot learning via vector quantization in deep embedded space | — | 0
Improving Low-Precision Network Quantization via Bin Regularization | — | 0
TwinDNN: A Tale of Two Deep Neural Networks | — | 0
Explore the Potential of CNN Low Bit Training | — | 0
DQSGD: Dynamic Quantized Stochastic Gradient Descent for Communication-Efficient Distributed Learning | — | 0
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization | — | 0
Improving the accuracy of neural networks in analog computing-in-memory systems by a generalized quantization method | — | 0
Post-Training Weighted Quantization of Neural Networks for Language Models | — | 0
Collaborative Filtering with Smooth Reconstruction of the Preference Function | — | 0
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization | — | 0
WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic | — | 0
WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Regularization | Code | 0
Hybrid and Non-Uniform DNN quantization methods using Retro Synthesis data for efficient inference | — | 0
Multi-Prize Lottery Ticket Hypothesis: Finding Generalizable and Efficient Binary Subnetworks in a Randomly Weighted Neural Network | — | 0
Weights Having Stable Signs Are Important: Finding Primary Subnetworks and Kernels to Compress Binary Weight Networks | — | 0
Learned Multi-Resolution Variable-Rate Image Compression with Octave-based Residual Blocks | — | 0
BinaryBERT: Pushing the Limit of BERT Quantization | — | 0
Improving Adversarial Robustness in Weight-quantized Neural Networks | — | 0
Page 74 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified