SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
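In the most common variant, uniform affine quantization, a float tensor is mapped to int8 via a scale and zero-point derived from its value range. Below is a minimal NumPy sketch assuming simple min/max calibration; the function names are illustrative, and this is not the specific scheme of the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine quantization: float32 -> int8 plus (scale, zero_point)."""
    scale = max((x.max() - x.min()) / 255.0, 1e-12)  # int8 spans 256 levels
    zero_point = np.round(-128.0 - x.min() / scale)  # maps x.min() to -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale, zero_point) -> np.ndarray:
    """Recover an approximate float32 tensor from its int8 encoding."""
    return (scale * (q.astype(np.float32) - zero_point)).astype(np.float32)

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zp)
print(np.abs(x - x_hat).max())  # rounding error is bounded by ~scale / 2
```

Storing `q` in int8 cuts memory 4x versus float32; the compute savings come from running matrix multiplies in integer arithmetic on hardware that supports it.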

Papers

Showing 2001–2025 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Bandwidth-efficient Inference for Neural Image Compression | | 0 |
| Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | | 0 |
| RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems | | 0 |
| QuantEase: Optimization-based Quantization for Language Models | | 0 |
| On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks | | 0 |
| A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking | | 0 |
| Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0 |
| On the fly Deep Neural Network Optimization Control for Low-Power Computer Vision | | 0 |
| Softmax Bias Correction for Quantized Generative Models | | 0 |
| eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models | | 0 |
| Few shot font generation via transferring similarity guided global style and quantization local style | Code | 1 |
| RepCodec: A Speech Representation Codec for Speech Tokenization | Code | 1 |
| SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models | Code | 2 |
| Learning Category Trees for ID-Based Recommendation: Exploring the Power of Differentiable Vector Quantization | Code | 0 |
| FPTQ: Fine-grained Post-Training Quantization for Large Language Models | | 0 |
| Implementation and Evaluation of Physical Layer Key Generation on SDR based LoRa Platform | | 0 |
| Continual Learning for Generative Retrieval over Dynamic Corpora | Code | 0 |
| Uncovering the Hidden Cost of Model Compression | Code | 0 |
| On-Device Learning with Binary Neural Networks | | 0 |
| Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation | Code | 0 |
| Maestro: Uncovering Low-Rank Structures via Trainable Decomposition | Code | 0 |
| MEMORY-VQ: Compression for Tractable Internet-Scale Memory | | 0 |
| VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization | Code | 1 |
| OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2 |
| Efficient Learned Lossless JPEG Recompression | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
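Top-1 accuracy (%), the metric for every entry in the table above, is the fraction of test samples whose highest-scoring predicted class matches the ground-truth label. A minimal sketch, where the synthetic logits and labels are placeholders:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose argmax prediction equals the label."""
    return float((logits.argmax(axis=1) == labels).mean())

# e.g. 1000-class ImageNet-style logits for a small batch
logits = np.random.randn(8, 1000)
labels = np.random.randint(0, 1000, size=8)
print(f"{100 * top1_accuracy(logits, labels):.2f}%")
```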
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |
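TAR @ FAR=1e-4, reported in the two tables above, is a verification metric: pick the score threshold at which only 1 in 10,000 impostor (non-matching) pairs is falsely accepted, then report the fraction of genuine (matching) pairs accepted at that threshold. A minimal sketch with synthetic score distributions; the Gaussians are illustrative:

```python
import numpy as np

def tar_at_far(genuine: np.ndarray, impostor: np.ndarray, far: float = 1e-4) -> float:
    """True Accept Rate at the threshold where the False Accept Rate equals `far`."""
    # Threshold such that only a `far` fraction of impostor scores exceed it.
    thr = np.quantile(impostor, 1.0 - far)
    return float((genuine >= thr).mean())

genuine = np.random.normal(0.8, 0.1, 100_000)     # scores for matching pairs
impostor = np.random.normal(0.2, 0.1, 1_000_000)  # scores for non-matching pairs
print(tar_at_far(genuine, impostor, far=1e-4))
```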
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |