SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
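As a rough illustration of the idea (a minimal sketch in NumPy, not code from the paper above; the function names are hypothetical), uniform affine quantization maps a float32 tensor onto int8 values via a scale and a zero point:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine quantization of a float32 tensor to int8.

    Returns the int8 tensor plus the (scale, zero_point) pair needed
    to map the values back to floating point.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    # Guard against a constant tensor (zero dynamic range).
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
print("max abs round-trip error:", np.abs(x - dequantize(q, scale, zp)).max())
```

The round-trip error printed at the end is bounded by roughly half the quantization step (scale / 2), which is the trade-off quantization methods try to manage.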

Papers

Showing 2026–2050 of 4925 papers

Title | Status | Hype
A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance | Code | 0
Quantized distributed Nash equilibrium seeking under DoS attacks | - | 0
Hybrid noise shaping for audio coding using perfectly overlapped window | - | 0
Robust open-set classification for encrypted traffic fingerprinting | Code | 0
Consistent Signal Reconstruction from Streaming Multivariate Time Series | - | 0
Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition | - | 0
Distributed Energy Resource Management: All-Time Resource-Demand Feasibility, Delay-Tolerance, Nonlinearity, and Beyond | - | 0
Towards Clip-Free Quantized Super-Resolution Networks: How to Tame Representative Images | - | 0
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers | Code | 1
QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection | - | 0
Dataset Quantization | Code | 2
Sampling From Autoencoders' Latent Space via Quantization And Probability Mass Function Concepts | - | 0
Quantization-based Optimization with Perspective of Quantum Mechanics | - | 0
Analyzing Quantization in TVM | - | 0
FunQuant: A R package to perform quantization in the context of rare events and time-consuming simulations | - | 0
NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning | Code | 1
SHARK: A Lightweight Model Compression Approach for Large-scale Recommender Systems | - | 0
ResQ: Residual Quantization for Video Perception | - | 0
JPEG Quantized Coefficient Recovery via DCT Domain Spatial-Frequential Transformer | - | 0
FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs | - | 0
Precision and Recall Reject Curves for Classification | - | 0
Characteristics of networks generated by kernel growing neural gas | Code | 0
Gradient-Based Post-Training Quantization: Challenging the Status Quo | - | 0
A Survey on Model Compression for Large Language Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified