SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
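As a rough illustration (not taken from the cited paper), the sketch below shows symmetric per-tensor int8 quantization and dequantization of a float32 weight tensor with NumPy. The choice of scale (max absolute value mapped to 127) and the clipping range are assumptions of this example, not a prescribed scheme.

import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of a float32 array to int8."""
    scale = np.max(np.abs(x)) / 127.0  # map the largest magnitude onto the int8 range
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("max abs error:", np.max(np.abs(w - w_hat)))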

Papers

Showing 2426–2450 of 4925 papers

Title | Status | Hype
Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs |  | 0
An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models |  | 0
Adaptive Dataset Quantization |  | 0
Efficient-Adam: Communication-Efficient Distributed Adam |  | 0
Efficient Adaptive Activation Rounding for Post-Training Quantization |  | 0
Breaking the Bias: Recalibrating the Attention of Industrial Anomaly Detection |  | 0
Efficiency Meets Fidelity: A Novel Quantization Framework for Stable Diffusion |  | 0
Effects of VLSI Circuit Constraints on Temporal-Coding Multilayer Spiking Neural Networks |  | 0
Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model |  | 0
An Inquiry into Datacenter TCO for LLM Inference with FP8 |  | 0
Effect of Weight Quantization on Learning Models by Typical Case Analysis |  | 0
Effect of Signal Quantization on Performance Measures of a 1st Order One Dimensional Differential Microphone Array |  | 0
BrainStratify: Coarse-to-Fine Disentanglement of Intracranial Neural Dynamics |  | 0
Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations |  | 0
Brain-inspired reverse adversarial examples |  | 0
An Intra-BRNN and GB-RVQ Based END-TO-END Neural Audio Codec |  | 0
Abstractive summarization from Audio Transcription |  | 0
Effective Quantization for Diffusion Models on CPUs |  | 0
Effective Quantization Approaches for Recurrent Neural Networks |  | 0
Effective Interplay between Sparsity and Quantization: From Theory to Practice |  | 0
Brain Inspired Cortical Coding Method for Fast Clustering and Codebook Generation |  | 0
Effective and Fast: A Novel Sequential Single Path Search for Mixed-Precision Quantization |  | 0
Effective and Efficient Mixed Precision Quantization of Speech Foundation Models |  | 0
Boost Vision Transformer with GPU-Friendly Sparsity and Quantization |  | 0
An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis |  | 0
Page 98 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 |  | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 |  | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 |  | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 |  | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 |  | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 |  | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 |  | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 |  | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 |  | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 |  | Unverified
2 | DTQ | MAP | 0.79 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 |  | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 98.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 92.92 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 95.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 96.38 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 99.8 |  | Unverified