SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
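The float-to-fixed-point mapping described above can be sketched as symmetric per-tensor int8 quantization. This is an illustrative sketch only, not the method of the cited paper: the function names and the simple max-abs scale calibration are assumptions for demonstration; production frameworks use calibrated scales and zero-points.

```python
def quantize_int8(xs):
    # Symmetric per-tensor quantization: map floats to ints in [-127, 127].
    # Scale is chosen so the largest-magnitude value maps to +/-127
    # (max-abs calibration; an assumption for this sketch).
    scale = max(abs(v) for v in xs) / 127.0 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in xs]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate floats; per-element error is at most scale / 2.
    return [v * scale for v in q]

x = [0.5, -1.2, 3.3, 0.0]
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
```

Each recovered value `x_hat[i]` differs from `x[i]` by at most half a quantization step (`scale / 2`), which is the trade-off quantized training exploits: cheaper int8 arithmetic at the cost of bounded rounding error.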

Papers

Showing 2776–2800 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Detecting Face Synthesis Using a Concealed Fusion Model | | 0 |
| DHNet: Double MPEG-4 Compression Detection via Multiple DCT Histograms | | 0 |
| Detection of small changes in medical and random-dot images comparing self-organizing map performance to human detection | | 0 |
| Development of a Thermodynamics of Human Cognition and Human Culture | | 0 |
| Development of Quantized DNN Library for Exact Hardware Emulation | | 0 |
| Device Interoperability for Learned Image Compression with Weights and Activations Quantization | | 0 |
| DFTerNet: Towards 2-bit Dynamic Fusion Networks for Accurate Human Activity Recognition | | 0 |
| Diagnostic data integration using deep neural networks for real-time plasma analysis | | 0 |
| Differentiable Discrete Device-to-System Codesign for Optical Neural Networks via Gumbel-Softmax | | 0 |
| Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution | | 0 |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | | 0 |
| Differentiable Product Quantization for Learning Compact Embedding Layers | | 0 |
| Differentiable Search for Finding Optimal Quantization Strategy | | 0 |
| Differentiable Training for Hardware Efficient LightNNs | | 0 |
| Differential Deep Detection in Massive MIMO With One-Bit ADC | | 0 |
| Differential error feedback for communication-efficient decentralized learning | | 0 |
| Differential Modulation in Massive MIMO With Low-Resolution ADCs | | 0 |
| Differential Privacy with Random Projections and Sign Random Projections | | 0 |
| Diffusion-based Perceptual Neural Video Compression with Temporal Diffusion Information Reuse | | 0 |
| Diffusion Product Quantization | | 0 |
| DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation | | 0 |
| DILEMMA: Joint LLM Quantization and Distributed LLM Inference Over Edge Computing Systems | | 0 |
| Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes | | 0 |
| Dimension-Free Bounds for Low-Precision Training | | 0 |
| DipSVD: Dual-importance Protected SVD for Efficient LLM Compression | | 0 |
Page 112 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |