SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
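The float-to-fixed-point mapping described above can be sketched as a simple affine (scale + zero-point) int8 quantizer. This is a minimal illustrative sketch, not the method of the cited paper; all function names are assumptions.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization of a float32 tensor to int8.

    Maps the observed range [min, max] onto the int8 range [-128, 127].
    Illustrative helper, not an API from the cited paper.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    # One quantization step in float units; guard against constant tensors.
    scale = (x_max - x_min) / (qmax - qmin) or 1.0
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

np.random.seed(0)
x = np.random.randn(4, 4).astype(np.float32)
q, scale, zero_point = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zero_point)
# Round-trip error stays within about one quantization step per element.
assert np.abs(x - x_hat).max() <= scale
```

The int8 tensor plus a single (scale, zero_point) pair is what gets stored or computed on; higher-precision values are recovered only approximately, which is the accuracy/cost trade-off the benchmark tables below measure.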

Papers

Showing 2076–2100 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique | | 0 |
| Extreme Image Compression using Fine-tuned VQGANs | | 0 |
| COMET: Towards Practical W4A4KV4 LLMs Serving | | 0 |
| Extreme Compression for Pre-trained Transformers Made Simple and Efficient | | 0 |
| Post Training Quantization of Large Language Models with Microscaling Formats | | 0 |
| Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising | | 0 |
| Exposing Hardware Building Blocks to Machine Learning Frameworks | | 0 |
| Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks | | 0 |
| A QP-adaptive Mechanism for CNN-based Filter in Video Coding | | 0 |
| AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence | | 0 |
| Exploring Semantic Segmentation on the DCT Representation | | 0 |
| Collaborative Quantization for Cross-Modal Similarity Search | | 0 |
| Collaborative Quantization Embeddings for Intra-Subject Prostate MR Image Registration | | 0 |
| Exploring Neural Networks Quantization via Layer-Wise Quantization Analysis | | 0 |
| Exploring Model Invariance with Discrete Search for Ultra-Low-Bit Quantization | | 0 |
| Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks | | 0 |
| APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models | | 0 |
| A Data and Compute Efficient Design for Limited-Resources Deep Learning | | 0 |
| Exploring FPGA designs for MX and beyond | | 0 |
| Exploring Extreme Quantization in Spiking Language Models | | 0 |
| Collaborative Filtering with Smooth Reconstruction of the Preference Function | | 0 |
| Exploring Automatic Gym Workouts Recognition Locally On Wearable Resource-Constrained Devices | | 0 |
| Collaborative Edge AI Inference over Cloud-RAN | | 0 |
| Explore the Potential of CNN Low Bit Training | | 0 |
| Explore Cross-Codec Quality-Rate Convex Hulls Relation for Adaptive Streaming | | 0 |
Page 84 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |