
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
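
To make the float-to-fixed-point idea concrete, below is a minimal NumPy sketch of symmetric per-tensor int8 quantization. The function names and the max-abs scaling rule are illustrative assumptions, not the scheme of the cited paper or of any paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric uniform quantization: map float32 values onto int8 codes.

    The scale is a hypothetical per-tensor choice that maps the largest
    absolute value onto the int8 range [-127, 127].
    """
    scale = float(np.max(np.abs(x))) / 127.0 or 1.0  # fall back to 1.0 on all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 codes and the scale."""
    return q.astype(np.float32) * scale

# Round-trip example: per-element error is bounded by about scale / 2.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs round-trip error:", np.max(np.abs(x - x_hat)))
```

Training-time schemes such as the one cited above additionally quantize the back-propagation pass; the same scale/round/clip recipe can in principle be applied per tensor to gradients as well as weights and activations.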

Papers

Showing 4251–4275 of 4925 papers

Title | Status | Hype
Distributed Constraint-Coupled Optimization over Lossy Networks | - | 0
Distributed Convolutional Neural Network Training on Mobile and Edge Clusters | - | 0
Distributed CPU Scheduling Subject to Nonlinear Constraints | - | 0
Distributed Deep Convolutional Compression for Massive MIMO CSI Feedback | - | 0
Distributed Deep Reinforcement Learning Based Gradient Quantization for Federated Learning Enabled Vehicle Edge Computing | - | 0
Distributed Delay-Tolerant Strategies for Equality-Constraint Sum-Preserving Resource Allocation | - | 0
Distributed Energy Resource Management: All-Time Resource-Demand Feasibility, Delay-Tolerance, Nonlinearity, and Beyond | - | 0
Distributed Learning with Compressed Gradient Differences | - | 0
Distributed Learning with Sublinear Communication | - | 0
Distributed Mean Estimation with Limited Communication | - | 0
New Bounds For Distributed Mean Estimation and Variance Reduction | - | 0
Distributed Optimization for Quadratic Cost Functions over Large-Scale Networks with Quantized Communication and Finite-Time Convergence | - | 0
Distributed Optimization via Gradient Descent with Event-Triggered Zooming over Quantized Communication | - | 0
Distributed Optimization with Efficient Communication, Event-Triggered Solution Enhancement, and Operation Stopping | - | 0
Distributed Optimization with Finite Bit Adaptive Quantization for Efficient Communication and Precision Enhancement | - | 0
Distribution Adaptive INT8 Quantization for Training CNNs | - | 0
Distribution-Aware Adaptive Multi-Bit Quantization | - | 0
Distribution-Preserving k-Anonymity | - | 0
Distribution-sensitive Information Retention for Accurate Binary Neural Network | - | 0
Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training | - | 0
Ditto: Accelerating Diffusion Model via Temporal Value Similarity | - | 0
Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization | - | 0
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment | - | 0
Diversifying Sample Generation for Accurate Data-Free Quantization | - | 0
Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | - | 0
Page 171 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified