SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16), as sketched in the example below.

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
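
To make the float-to-fixed-point mapping concrete, here is a minimal sketch of uniform affine int8 quantization in Python/NumPy. The function names (quantize_int8, dequantize) and the simple min/max calibration are illustrative assumptions, not the scheme of the cited paper or of any paper listed on this page.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine quantization of a float32 array to int8.

    Illustrative sketch: scale and zero_point map the observed
    float range [x_min, x_max] onto the int8 range [-128, 127].
    """
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    zero_point = int(round(-128 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(x)
print(np.abs(dequantize(q, s, z) - x).max())  # round-trip error, roughly scale/2 per element
```

The round-trip error is bounded by roughly half the scale, which is why low-bit quantization trades a small amount of accuracy for much cheaper integer arithmetic.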

Papers

Showing 2451–2475 of 4925 papers

Title | Status | Hype
Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the Large-Composition Regime | | 0
CALM: Co-evolution of Algorithms and Language Model for Automatic Heuristic Design | | 0
CAMBI: Contrast-aware Multiscale Banding Index | | 0
Cancer Subtyping via Embedded Unsupervised Learning on Transcriptomics Data | | 0
Can General-Purpose Large Language Models Generalize to English-Thai Machine Translation? | | 0
Can Large Language Models Understand Context? | | 0
Causal Speech Enhancement with Predicting Semantics based on Quantized Self-supervised Learning Features | | 0
CBQ: Cross-Block Quantization for Large Language Models | | 0
CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning | | 0
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization | | 0
CEG4N: Counter-Example Guided Neural Network Quantization Refinement | | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | | 0
Cell growth rate dictates the onset of glass to fluid-like transition and long time super-diffusion in an evolving cell colony | | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN | | 0
Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference | | 0
Channel-Aware Constellation Design for Digital OTA Computation | | 0
Channel Balancing for Accurate Quantization of Winograd Convolutions | | 0
Channel Estimation for MIMO Hybrid Architectures with Low Resolution ADCs for mmWave Communication | | 0
Channel Estimation in MIMO Systems with One-bit Spatial Sigma-delta ADCs | | 0
Channel Pruning In Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method | | 0
Channel-wise Hessian Aware trace-Weighted Quantization of Neural Networks | | 0
Channel-Wise Mixed-Precision Quantization for Large Language Models | | 0
Characterising Bias in Compressed Models | | 0
Characterization of the frequency response of channel-interleaved photonic ADCs based on the optical time-division demultiplexer | | 0
Characterizing Coherent Integrated Photonic Neural Networks under Imperfections | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified