SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
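
As a concrete illustration of the float-to-integer mapping described above, here is a minimal sketch of per-tensor symmetric int8 quantization in NumPy. It is not the method of the cited paper; the helper names quantize_int8 and dequantize are hypothetical.

    import numpy as np

    def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
        """Map a float32 tensor to int8 codes with a single per-tensor scale."""
        scale = float(np.max(np.abs(x))) / 127.0
        scale = scale if scale > 0 else 1.0        # guard against an all-zero tensor
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximate float32 tensor from the int8 codes."""
        return q.astype(np.float32) * scale

    x = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(x)
    print(np.max(np.abs(x - dequantize(q, s))))    # rounding error is at most scale / 2

Because each int8 code stands for a multiple of the scale, arithmetic can run in cheap integer units, and the rounding error per element is bounded by half the scale.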

Papers

Showing 301–350 of 4925 papers

Title | Status | Hype
APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores | Code | 1
FP4 All the Way: Fully Quantized Training of LLMs | Code | 1
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | Code | 1
Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation | Code | 1
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification | Code | 1
Structured Multi-Track Accompaniment Arrangement via Style Prior Modelling | Code | 1
ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Method of Multipliers | Code | 1
FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations | Code | 1
FracBits: Mixed Precision Quantization via Fractional Bit-Widths | Code | 1
Generalized Product Quantization Network for Semi-supervised Image Retrieval | Code | 1
Heatmap Regression via Randomized Rounding | Code | 1
Fine-grained Data Distribution Alignment for Post-Training Quantization | Code | 1
Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning | Code | 1
Fine-tuning Quantized Neural Networks with Zeroth-order Optimization | Code | 1
FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation | Code | 1
FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos | Code | 1
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks | Code | 1
Finite Scalar Quantization: VQ-VAE Made Simple | Code | 1
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning | Code | 1
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression | Code | 1
Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Code | 1
APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers | Code | 1
Feature Quantization Improves GAN Training | Code | 1
Few shot font generation via transferring similarity guided global style and quantization local style | Code | 1
Fixed-point Quantization of Convolutional Neural Networks for Quantized Inference on Embedded Platforms | Code | 1
FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Code | 1
FastText.zip: Compressing text classification models | Code | 1
Fast Nearest Convolution for Real-Time Efficient Image Super-Resolution | Code | 1
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN | Code | 1
Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy | Code | 1
Fast Distance-based Anomaly Detection in Images Using an Inception-like Autoencoder | Code | 1
Fast and Low-Cost Genomic Foundation Models via Outlier Removal | Code | 1
1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit | Code | 1
Fast, Compact and Highly Scalable Visual Place Recognition through Sequence-based Matching of Overloaded Representations | Code | 1
Fast Lossless Neural Compression with Integer-Only Discrete Flows | Code | 1
Exploring Frequency-Inspired Optimization in Transformer for Efficient Single Image Super-Resolution | Code | 1
Exploring Quantization for Efficient Pre-Training of Transformer Language Models | Code | 1
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Exploring the Connection Between Binary and Spiking Neural Networks | Code | 1
ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking | Code | 1
Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | Code | 1
Exploiting LLM Quantization | Code | 1
Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox | Code | 1
Evaluation and Optimization of Gradient Compression for Distributed Deep Learning | Code | 1
ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation | Code | 1
2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution | Code | 1
Error Diffusion: Post Training Quantization with Block-Scaled Number Formats for Neural Networks | Code | 1
EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search | Code | 1
Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices | Code | 1
Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified