
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
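As a concrete illustration of the float-to-fixed-point mapping described above, here is a minimal sketch of symmetric per-tensor uniform quantization from float32 to int8. The scale choice, rounding mode, and function names are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 tensor to int8 codes with a single per-tensor scale (assumed scheme)."""
    scale = max(np.abs(x).max() / 127.0, 1e-8)  # guard against all-zero tensors
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 codes and the stored scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the rounding error.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.abs(x - x_hat).max())
```

Int8 storage is 4x smaller than float32 and integer arithmetic is typically cheaper on commodity hardware; the trade-off is the rounding error printed above, which quantization-aware methods aim to keep small enough that accuracy is preserved.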

Papers

Showing 351–400 of 4925 papers

Title | Status | Hype
ARB-LLM: Alternating Refined Binarizations for Large Language Models | Code | 1
Lightweight Diffusion Models for Resource-Constrained Semantic Communication | Code | 1
Locret: Enhancing Eviction in Long-Context LLM Inference with Trained Retaining Heads on Consumer-Grade Devices | Code | 1
Search for Efficient Large Language Models | Code | 1
BitQ: Tailoring Block Floating Point Precision for Improved DNN Efficiency on Resource-Constrained Devices | Code | 1
MICSim: A Modular Simulator for Mixed-signal Compute-in-Memory based AI Accelerator | Code | 1
DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing | Code | 1
BBS: Bi-directional Bit-level Sparsity for Deep Learning Acceleration | Code | 1
Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1
VQ-Flow: Taming Normalizing Flows for Multi-Class Anomaly Detection via Hierarchical Vector Quantization | Code | 1
Hyper-Compression: Model Compression via Hyperfunction | Code | 1
1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit | Code | 1
Quantization-aware Matrix Factorization for Low Bit Rate Image Compression | Code | 1
Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation | Code | 1
EC-Guide: A Comprehensive E-Commerce Guide for Instruction Tuning and Quantization | Code | 1
Pruning Large Language Models with Semi-Structural Adaptive Sparse Training | Code | 1
Mixed-precision Neural Networks on RISC-V Cores: ISA extensions for Multi-Pumped Soft SIMD Operations | Code | 1
A Benchmark for Gaussian Splatting Compression and Quality Assessment Study | Code | 1
AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer | Code | 1
Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models | Code | 1
Exploring Quantization for Efficient Pre-Training of Transformer Language Models | Code | 1
PSC: Posterior Sampling-Based Compression | Code | 1
On Exact Bit-level Reversible Transformers Without Changing Architectures | Code | 1
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization | Code | 1
Dataset Quantization with Active Learning based Adaptive Sampling | Code | 1
OvSW: Overcoming Silent Weights for Accurate Binary Neural Networks | Code | 1
CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-Training Quantization of ViTs | Code | 1
SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking | Code | 1
QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices | Code | 1
LLMEasyQuant: Scalable Quantization for Parallel and Distributed LLM Inference | Code | 1
ViT-1.58b: Mobile Vision Transformers in the 1-bit Era | Code | 1
Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging | Code | 1
ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models | Code | 1
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models | Code | 1
QTIP: Quantization with Trellises and Incoherence Processing | Code | 1
ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking | Code | 1
Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox | Code | 1
Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | Code | 1
2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution | Code | 1
From Analog to Digital: Multi-Order Digital Joint Coding-Modulation for Semantic Communication | Code | 1
QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead | Code | 1
Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning | Code | 1
SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining | Code | 1
ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation | Code | 1
CE-VAE: Capsule Enhanced Variational AutoEncoder for Underwater Image Enhancement | Code | 1
MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization | Code | 1
P^2-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer | Code | 1
4-bit Shampoo for Memory-Efficient Network Training | Code | 1
Exploiting LLM Quantization | Code | 1
SLMRec: Distilling Large Language Models into Small for Sequential Recommendation | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified