
Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
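To make the idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. It is not drawn from any paper listed on this page; the function names (`quantize_int8`, `dequantize`) and the choice of a single per-tensor scale are illustrative assumptions. A float32 tensor is mapped to int8 via one scale factor and recovered (approximately) by multiplying back.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale.

    Illustrative sketch; real schemes may use per-channel scales,
    zero points, or calibrated clipping ranges.
    """
    # Map the largest magnitude to the int8 limit; guard against all-zero input.
    scale = max(np.abs(x).max() / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from its int8 representation."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.abs(x - x_hat).max())  # bounded by ~scale / 2
```

The rounding step is where information is lost: the reconstruction error per element is at most half the scale, which is why tensors with large outliers (a recurring theme in the papers below) quantize poorly under a single per-tensor scale.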

Papers

Showing 601–625 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| Efficient Quantized Sparse Matrix Operations on Tensor Cores | Code | 1 |
| Exploring the Connection Between Binary and Spiking Neural Networks | Code | 1 |
| Fast Lossless Neural Compression with Integer-Only Discrete Flows | Code | 1 |
| BinaryHPE: 3D Human Pose and Shape Estimation via Binarization | Code | 1 |
| BED: A Real-Time Object Detection System for Edge Devices | Code | 1 |
| Fast and Low-Cost Genomic Foundation Models via Outlier Removal | Code | 1 |
| GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework | Code | 1 |
| DVD-Quant: Data-free Video Diffusion Transformers Quantization | Code | 1 |
| Fast Nearest Convolution for Real-Time Efficient Image Super-Resolution | Code | 1 |
| Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN | Code | 1 |
| Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks | Code | 1 |
| Benchmarking Quantized Neural Networks on FPGAs with FINN | Code | 1 |
| Exploring Frequency-Inspired Optimization in Transformer for Efficient Single Image Super-Resolution | Code | 1 |
| Feature Quantization Improves GAN Training | Code | 1 |
| DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization | Code | 1 |
| Few shot font generation via transferring similarity guided global style and quantization local style | Code | 1 |
| DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection | Code | 1 |
| FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation | Code | 1 |
| Dynamic Network Quantization for Efficient Video Inference | Code | 1 |
| Fine-grained Data Distribution Alignment for Post-Training Quantization | Code | 1 |
| Finite Scalar Quantization: VQ-VAE Made Simple | Code | 1 |
| Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression | Code | 1 |
| BAFFLE: A Baseline of Backpropagation-Free Federated Learning | Code | 1 |
| Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Code | 1 |
| Catastrophic Failure of LLM Unlearning via Quantization | Code | 1 |
Page 25 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |