
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
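To make the float-to-fixed-point mapping concrete, the sketch below implements symmetric per-tensor int8 quantization and dequantization in NumPy. It is a minimal illustration of the general technique under assumed conventions (a single per-tensor scale, rounding into [-128, 127]); the function names are illustrative and this is not the adaptive-precision method of the cited paper.

    import numpy as np

    def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
        # Illustrative sketch (not the cited paper's method): symmetric
        # per-tensor quantization of float32 values to int8 codes.
        max_abs = float(np.max(np.abs(x)))
        scale = max_abs / 127.0 if max_abs > 0 else 1.0  # one scale for the whole tensor
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
        # Map int8 codes back to approximate float32 values.
        return q.astype(np.float32) * scale

    # Usage: a float32 tensor round-trips through int8 with bounded error.
    x = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(x)
    x_hat = dequantize_int8(q, scale)
    print("max abs reconstruction error:", float(np.max(np.abs(x - x_hat))))

Dequantization recovers only an approximation of the original values: the per-element rounding error is bounded by half the scale, which is the accuracy cost quantization trades for cheaper arithmetic and storage.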

Papers

Showing 1151–1200 of 4925 papers

Title | Status | Hype
MCRB for Parameter Estimation from One-Bit Quantized and Oversampled Measurements | — | 0
Make Some Noise: Towards LLM audio reasoning and generation using sound tokens | — | 0
Long-Tail Crisis in Nearest Neighbor Language Models | — | 0
MoQa: Rethinking MoE Quantization with Multi-stage Data-model Distribution Awareness | — | 0
A 71.2-μW Speech Recognition Accelerator with Recurrent Spiking Neural Network | — | 0
Q-MambaIR: Accurate Quantized Mamba for Efficient Image Restoration | — | 0
HOT: Hadamard-based Optimized Training | Code | 0
MAR-3D: Progressive Masked Auto-regressor for High-Resolution 3D Generation | — | 0
SINR: Sparsity Driven Compressed Implicit Neural Representations | — | 0
QUAD: Quantization and Parameter-Efficient Tuning of LLM with Activation Decomposition | Code | 0
Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization | — | 0
QSID-MPC: Model Predictive Control with System Identification from Quantized Data | — | 0
GranQ: Granular Zero-Shot Quantization with Channel-Wise Activation Scaling in QAT | — | 0
FFN Fusion: Rethinking Sequential Computation in Large Language Models | — | 0
4DGC: Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video | — | 0
Energy-Aware LLMs: A step towards sustainable AI for downstream applications | — | 0
Variance Control via Weight Rescaling in LLM Pre-training | Code | 0
Improving Quantization with Post-Training Model Expansion | — | 0
SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs | — | 0
Learning Linear Block Codes with Gradient Quantization | — | 0
Neural Networks: According to the Principles of Grassmann Algebra | — | 0
Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models | — | 0
Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction | — | 0
Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation | — | 0
LeanTTA: A Backpropagation-Free and Stateless Approach to Quantized Test-Time Adaptation on Edge Devices | — | 0
PARQ: Piecewise-Affine Regularized Quantization | — | 0
FP4DiT: Towards Effective Floating Point Quantization for Diffusion Transformers | Code | 0
RAG-based User Profiling for Precision Planning in Mixed-precision Over-the-Air Federated Learning | — | 0
Natural Quantization of Neural Networks | Code | 0
Quantization-Free Autoregressive Action Transformer | Code | 0
Robust Machine Unlearning for Quantized Neural Networks via Adaptive Gradient Reweighting with Similar Labels | — | 0
MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation without Vector Quantization | — | 0
CompMarkGS: Robust Watermarking for Compressed 3D Gaussian Splatting | — | 0
ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning | — | 0
ML-SpecQD: Multi-Level Speculative Decoding with Quantized Drafts | — | 0
ACT360: An Efficient 360-Degree Action Detection and Summarization Framework for Mission-Critical Training and Debriefing | — | 0
Versatile Physics-based Character Control with Hybrid Latent Representation | — | 0
Pathology Image Compression with Pre-trained Autoencoders | — | 0
Stabilizing Quantization-Aware Training by Implicit-Regularization on Hessian Matrix | — | 0
Understanding Flatness in Generative Models: Its Role and Benefits | — | 0
Global synchronization of multi-agent systems with nonlinear interactions | — | 0
Dual Codebook VQ: Enhanced Image Reconstruction with Reduced Codebook Size | — | 0
OuroMamba: A Data-Free Quantization Framework for Vision Mamba Models | — | 0
Automated Tomato Maturity Estimation Using an Optimized Residual Model with Pruning and Quantization Techniques | — | 0
Quantization for OpenAI's Whisper Models: A Comparative Analysis | Code | 0
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge | — | 0
ViM-VQ: Efficient Post-Training Vector Quantization for Visual Mamba | — | 0
Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks | — | 0
Accurate INT8 Training Through Dynamic Block-Level Fallback | — | 0
PRISM: Privacy-Preserving Improved Stochastic Masking for Federated Generative Models | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified