SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
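
As a concrete illustration of the idea above, here is a minimal NumPy sketch of uniform affine (asymmetric) quantization from float32 to int8 and the matching dequantization. The function names and the min/max calibration scheme are illustrative assumptions for this sketch, not the method of the sourced paper or of any paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Illustrative uniform affine quantization: float32 -> int8.

    Scale and zero-point are calibrated from the tensor's min/max
    range (asymmetric scheme). Names are assumptions for this sketch.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
print("max abs reconstruction error:", np.abs(dequantize(q, scale, zp) - x).max())
```

Quantized training, as in the sourced paper, additionally has to handle gradients through the non-differentiable rounding step (commonly via a straight-through estimator), but the forward mapping above is the core idea.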

Papers

Showing 551–600 of 4925 papers

Title | Status | Hype
DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks | Code | 1
Data-Free Network Quantization With Adversarial Knowledge Distillation | Code | 1
Data-Free Quantization Through Weight Equalization and Bias Correction | Code | 1
D^2-DPM: Dual Denoising for Quantized Diffusion Probabilistic Models | Code | 1
A Benchmark for Gaussian Splatting Compression and Quality Assessment Study | Code | 1
Keyword Spotting System and Evaluation of Pruning and Quantization Methods on Low-power Edge Microcontrollers | Code | 1
KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
ABCD: Arbitrary Bitwise Coefficient for De-Quantization | Code | 1
CycleVAR: Repurposing Autoregressive Model for Unsupervised One-Step Image Translation | Code | 1
KeyPosS: Plug-and-Play Facial Landmark Detection through GPS-Inspired True-Range Multilateration | Code | 1
Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1
Analog Foundation Models | Code | 1
CPLLM: Clinical Prediction with Large Language Models | Code | 1
Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs | Code | 1
Joint Privacy Enhancement and Quantization in Federated Learning | Code | 1
Convolutional Autoencoder-Based Phase Shift Feedback Compression for Intelligent Reflecting Surface-Assisted Wireless Systems | Code | 1
JointSQ: Joint Sparsification-Quantization for Distributed Learning | Code | 1
ConveRT: Efficient and Accurate Conversational Representations from Transformers | Code | 1
Jointly Optimizing Query Encoder and Product Quantization to Improve Retrieval Performance | Code | 1
Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers | Code | 1
kANNolo: Sweet and Smooth Approximate k-Nearest Neighbors Search | Code | 1
Context-aware Communication for Multi-agent Reinforcement Learning | Code | 1
Continual Learning via Bit-Level Information Preserving | Code | 1
Confounding Tradeoffs for Neural Network Quantization | Code | 1
Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1
It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher | Code | 1
Conditional Coding and Variable Bitrate for Practical Learned Video Coding | Code | 1
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models | Code | 1
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization | Code | 1
Bayesian Bits: Unifying Quantization and Pruning | Code | 1
Compression with Bayesian Implicit Neural Representations | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
SimCC: a Simple Coordinate Classification Perspective for Human Pose Estimation | Code | 1
Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket | Code | 1
Improving Neural Network Efficiency via Post-Training Quantization With Adaptive Floating-Point | Code | 1
Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming | Code | 1
Compress Any Segment Anything Model (SAM) | Code | 1
Compressing LLMs: The Truth is Rarely Pure and Never Simple | Code | 1
Improving Detail in Pluralistic Image Inpainting with Feature Dequantization | Code | 1
A Memory Efficient Baseline for Open Domain Question Answering | Code | 1
BAND-2k: Banding Artifact Noticeable Database for Banding Detection and Quality Assessment | Code | 1
Image Compression with Recurrent Neural Network and Generalized Divisive Normalization | Code | 1
Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing | Code | 1
COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization | Code | 1
INT-FP-QSim: Mixed Precision and Formats For Large Language Models and Vision Transformers | Code | 1
CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution | Code | 1
BBS: Bi-directional Bit-level Sparsity for Deep Learning Acceleration | Code | 1
Improvements to Target-Based 3D LiDAR to Camera Calibration | Code | 1
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings | Code | 1
Page 12 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified