SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
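
To make the idea concrete, below is a minimal sketch of symmetric linear (scale-only) int8 quantization in Python with NumPy. It illustrates the general technique described above, not the method of the cited paper; the function names quantize_int8 and dequantize are hypothetical.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric linear quantization of a float32 tensor to int8."""
    # Map the largest magnitude in the tensor onto the int8 limit (127);
    # the small floor avoids division by zero for an all-zero tensor.
    scale = max(float(np.abs(x).max()), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```

The quantized values can then be processed with cheap integer arithmetic, with the floating-point scale applied once at the end to recover real-valued outputs.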

Papers

Showing 1526–1550 of 4925 papers

Title | Status | Hype
Fast and Slow Gradient Approximation for Binary Neural Network Optimization | Code | 0
Federated Classification in Hyperbolic Spaces via Secure Aggregation of Convex Hulls | Code | 0
Quantization for OpenAI's Whisper Models: A Comparative Analysis | Code | 0
Flexible Mixed Precision Quantization for Learned Image Compression | Code | 0
FairGLVQ: Fairness in Partition-Based Classification | Code | 0
Extracting Usable Predictions from Quantized Networks through Uncertainty Quantification for OOD Detection | Code | 0
FALCON: Feature-Label Constrained Graph Net Collapse for Memory Efficient GNNs | Code | 0
Exploring Post-Training Quantization of Protein Language Models | Code | 0
Audio Spectral Enhancement: Leveraging Autoencoders for Low Latency Reconstruction of Long, Lossy Audio Sequences | Code | 0
Exploring Quantization and Mapping Synergy in Hardware-Aware Deep Neural Network Accelerators | Code | 0
Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces | Code | 0
Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II) | Code | 0
Explaining Reject Options of Learning Vector Quantization Classifiers | Code | 0
Expansion Quantization Network: An Efficient Micro-emotion Annotation and Detection Framework | Code | 0
ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content | Code | 0
Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training | Code | 0
DAQ: Density-Aware Post-Training Weight-Only Quantization For LLMs | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
EXAQ: Exponent Aware Quantization For LLMs Acceleration | Code | 0
Adaptive Prediction-Powered AutoEval with Reliability and Efficiency Guarantees | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms | Code | 0
Evaluating Quantized Large Language Models for Code Generation on Low-Resource Language Benchmarks | Code | 0
ACIQ: Analytical Clipping for Integer Quantization of neural networks | Code | 0
Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective | Code | 0
Page 62 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified