SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
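To make the description concrete, below is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. It illustrates the general float32-to-int8 mapping only, not the method of the source paper or of any paper listed below; the function names and the max-abs scaling rule are assumptions made for this example.

import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor quantization: map the largest magnitude to 127.
    # The 1e-8 floor is an assumption to avoid dividing by zero on all-zero inputs.
    scale = max(float(np.max(np.abs(x))), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximate float32 tensor from the int8 values and the scale.
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and measure the rounding error it introduces.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize_int8(q, scale))))

Practical schemes layer further details on top of this, such as per-channel scales, zero points for asymmetric ranges, and integer-only arithmetic in the compute kernels.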

Papers

Showing 1176–1200 of 4925 papers (page 48 of 197)

Title | Status | Hype
Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization | – | 0
Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript | – | 0
DoTA: Weight-Decomposed Tensor Adaptation for Large Language Models | – | 0
Downlink Clustering-Based Scheduling of IRS-Assisted Communications With Reconfiguration Constraints | – | 0
Communication-Efficient Decentralized Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control | – | 0
A Quantization-based Technique for Privacy Preserving Distributed Learning | – | 0
Communication Compression for Tensor Parallel LLM Inference | – | 0
A Quantitative Approach To The Temporal Dependency in Video Coding | – | 0
AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence | – | 0
3D Gaussian Splatting Data Compression with Mixture of Priors | – | 0
Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique | – | 0
COMET: Towards Practical W4A4KV4 LLMs Serving | – | 0
A QP-adaptive Mechanism for CNN-based Filter in Video Coding | – | 0
Post Training Quantization of Large Language Models with Microscaling Formats | – | 0
Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks | – | 0
Accelerating Neural Network Inference by Overflow Aware Quantization | – | 0
Collaborative Quantization for Cross-Modal Similarity Search | – | 0
A Data and Compute Efficient Design for Limited-Resources Deep Learning | – | 0
Collaborative Quantization Embeddings for Intra-Subject Prostate MR Image Registration | – | 0
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks | – | 0
APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models | – | 0
Collaborative Filtering with Smooth Reconstruction of the Preference Function | – | 0
Collaborative Edge AI Inference over Cloud-RAN | – | 0
AdaQAT: Adaptive Bit-Width Quantization-Aware Training | – | 0
Collaborative Automotive Radar Sensing via Mixed-Precision Distributed Array Completion | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified