SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
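
As a minimal illustration of the idea above (not taken from the cited paper), the sketch below performs symmetric per-tensor int8 quantization of a float32 array with NumPy; the function names quantize_int8 and dequantize are hypothetical helpers chosen for this example.

```python
import numpy as np

def quantize_int8(x):
    """Map a float32 array onto the signed int8 range [-127, 127] with one shared scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 array from its int8 representation."""
    return q.astype(np.float32) * scale

# Quantize a random float32 weight tensor and check the rounding error introduced.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```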

Papers

Showing 2751–2775 of 4925 papers

Title | Status | Hype
Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection | — | 0
Self-Adaptable Templates for Feature Coding | — | 0
Self-calibration for Language Model Quantization and Pruning | — | 0
Self-control: A Better Conditional Mechanism for Masked Autoregressive Model | — | 0
Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models | — | 0
Self-Supervised Consistent Quantization for Fully Unsupervised Image Retrieval | — | 0
Self-triggered Consensus of Multi-agent Systems with Quantized Relative State Measurements | — | 0
Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression | — | 0
Semantic Certainty Assessment in Vector Retrieval Systems: A Novel Framework for Embedding Quality Evaluation | — | 0
Semantic Residual for Multimodal Unified Discrete Representation | — | 0
Semantic Retention and Extreme Compression in LLMs: Can We Have Both? | — | 0
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers | — | 0
Semantic Text Compression for Classification | — | 0
Semi-Blind Post-Equalizer SINR Estimation and Dual CSI Feedback for Radar-Cellular Coexistence | — | 0
SEMINAR: Search Enhanced Multi-modal Interest Network and Approximate Retrieval for Lifelong Sequential Recommendation | — | 0
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization | — | 0
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization | — | 0
Semi-supervised Vector-Quantization in Visual SLAM using HGCN | — | 0
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware | — | 0
Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation | — | 0
SensorChat: Answering Qualitative and Quantitative Questions during Long-Term Multimodal Sensor Interactions | — | 0
Sensor Selection and Distributed Quantization for Energy Efficiency in Massive MTC | — | 0
SEP-Nets: Small and Effective Pattern Networks | — | 0
SeRP: Self-Supervised Representation Learning Using Perturbed Point Clouds | — | 0
Service Delay Minimization for Federated Learning over Mobile Devices | — | 0
Page 111 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified