SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
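
To make the float-to-fixed-point mapping above concrete, here is a minimal NumPy sketch of uniform affine int8 quantization. The helper names (quantize_int8, dequantize_int8) and the per-tensor min/max calibration are illustrative choices for this sketch, not taken from the paper cited above.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto the signed 8-bit grid [-128, 127]."""
    qmin, qmax = -128, 127
    # Step size of the integer grid (guarded against a constant tensor).
    scale = max(float(x.max() - x.min()) / (qmax - qmin), 1e-12)
    # Integer code that represents the real value 0.0.
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(np.int8), scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximation of the original float32 values."""
    return (scale * (q.astype(np.float32) - zero_point)).astype(np.float32)

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
err = np.abs(x - dequantize_int8(q, scale, zp)).max()
print(f"max round-trip error: {err:.6f} (bounded by scale/2 = {scale / 2:.6f})")
```

The round-trip error per element is bounded by half an integer step; production frameworks additionally fold the scale and zero-point into integer kernels and calibrate them over many batches rather than a single tensor.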

Papers

Showing 4451–4475 of 4925 papers

Title | Status | Hype
Adaptive Loss-aware Quantization for Multi-bit Networks | Code | 0
Self-Supervised Learning for Color Spike Camera Reconstruction | Code | 0
Quantized Fisher Discriminant Analysis | Code | 0
OLALa: Online Learned Adaptive Lattice Codes for Heterogeneous Federated Learning | Code | 0
Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models | Code | 0
Lipschitz Continuity Retained Binary Neural Network | Code | 0
Linearly Converging Error Compensated SGD | Code | 0
Self-supervised Pre-training of Text Recognizers | Code | 0
Explaining Reject Options of Learning Vector Quantization Classifiers | Code | 0
Self-supervised Product Quantization for Deep Unsupervised Image Retrieval | Code | 0
Deep Triplet Quantization | Code | 0
Deep Task-Based Analog-to-Digital Conversion | Code | 0
Lightweight Deep Learning Based Channel Estimation for Extremely Large-Scale Massive MIMO Systems | Code | 0
Compositional Sketch Search | Code | 0
Two-Step Quantization for Low-Bit Neural Networks | Code | 0
Composite Quantization | Code | 0
On-Device Language Models: A Comprehensive Review | Code | 0
Communication Efficient Private Federated Learning Using Dithering | Code | 0
On-Device LLM for Context-Aware Wi-Fi Roaming | Code | 0
Communication-Efficient Multi-Device Inference Acceleration for Transformer Models | Code | 0
Lightweight Client-Side Chinese/Japanese Morphological Analyzer Based on Online Learning | Code | 0
DeepShift: Towards Multiplication-Less Neural Networks | Code | 0
Algorithm-Hardware Co-Design of Distribution-Aware Logarithmic-Posit Encodings for Efficient DNN Inference | Code | 0
Bag of Tricks for Optimizing Transformer Efficiency | Code | 0
Towards Efficient Verification of Quantized Neural Networks | Code | 0
Page 179 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified