SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
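To make the float-to-fixed-point mapping concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. This is an illustrative toy, not the scheme from the cited paper: the function names and the choice of a single shared scale are assumptions for the example.

```python
# Symmetric per-tensor int8 quantization: choose one scale so the largest
# magnitude maps to 127, round each value to the nearest integer step,
# and clamp to the int8 range. Dequantization multiplies back by the scale.

def quantize_int8(xs):
    """Quantize a list of floats to int8 codes plus a shared scale."""
    max_abs = max(abs(v) for v in xs)
    scale = (max_abs / 127.0) if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 3.4, 0.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
```

The papers listed below vary this basic recipe: per-channel scales, learned clipping ranges, mixed bit-widths, or noise-based proxies for the rounding step.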

Papers

Showing 851–900 of 4925 papers

Title | Status | Hype
NICE: Noise Injection and Clamping Estimation for Neural Network Quantization | Code | 1
Continual Learning via Bit-Level Information Preserving | Code | 1
MBQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization | Code | 1
Continuous Visual Autoregressive Generation via Score Maximization | Code | 1
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | Code | 1
MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design | Code | 1
Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1
Confounding Tradeoffs for Neural Network Quantization | Code | 1
ARB-LLM: Alternating Refined Binarizations for Large Language Models | Code | 1
LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time | Code | 1
Arch-Net: Model Distillation for Architecture Agnostic Model Deployment | Code | 1
MPQ-DM: Mixed Precision Quantization for Extremely Low Bit Diffusion Models | Code | 1
Context-aware Communication for Multi-agent Reinforcement Learning | Code | 1
MQBench: Towards Reproducible and Deployable Model Quantization Benchmark | Code | 1
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores | Code | 1
Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters | Code | 1
COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization | Code | 1
Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval | Code | 1
Learning Cross-Scale Weighted Prediction for Efficient Neural Video Compression | Code | 1
Compression with Bayesian Implicit Neural Representations | Code | 1
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs | Code | 1
Learning Graph Quantized Tokenizers | Code | 1
A holistic approach to polyphonic music transcription with neural networks | Code | 1
A Refined Analysis of Massive Activations in LLMs | Code | 1
Learning to Structure an Image with Few Colors | Code | 1
Learning Statistical Texture for Semantic Segmentation | Code | 1
Learning to Groove with Inverse Sequence Transformations | Code | 1
Learning to Improve Image Compression without Changing the Standard Decoder | Code | 1
CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution | Code | 1
L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning | Code | 1
Compressing LLMs: The Truth is Rarely Pure and Never Simple | Code | 1
Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries | Code | 1
Conditional Coding and Variable Bitrate for Practical Learned Video Coding | Code | 1
ConveRT: Efficient and Accurate Conversational Representations from Transformers | Code | 1
NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning | Code | 1
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer | Code | 1
NIPQ: Noise proxy-based Integrated Pseudo-Quantization | Code | 1
Transferable Sparse Adversarial Attack | Code | 1
Lightweight Super-Resolution Head for Human Pose Estimation | Code | 1
Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities | Code | 1
Model-Aware Deep Architectures for One-Bit Compressive Variational Autoencoding | Code | 0
Model Compression Techniques in Biometrics Applications: A Survey | Code | 0
Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition | Code | 0
A Tale of Two Models: Constructing Evasive Attacks on Edge Models | Code | 0
Mixed-Precision Quantization for Deep Vision Models with Integer Quadratic Programming | Code | 0
Model compression via distillation and quantization | Code | 0
Mixed Non-linear Quantization for Vision Transformers | Code | 0
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge | Code | 0
Mitigating Quantization Errors Due to Activation Spikes in GLU-Based LLMs | Code | 0
Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization | Code | 0
Page 18 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified