SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
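As an illustration of the float-to-fixed-point mapping described above, here is a minimal sketch of symmetric per-tensor int8 quantization. This shows the general idea only, not the adaptive-precision method from the cited paper; the function names are illustrative.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values to int8 codes with a shared scale factor."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # avoid div-by-zero on all-zero input
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 5, dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)  # per-element error is bounded by roughly scale / 2
```

Real training-time schemes add refinements such as per-channel scales, stochastic rounding, and separate precision for gradients, but the quantize/dequantize round trip above is the core operation.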

Papers

Showing 1101–1150 of 4925 papers

Title | Status | Hype
--- | --- | ---
Fast Point Cloud Geometry Compression with Context-based Residual Coding and INR-based Refinement | Code | 0
Synaptic Modulation using Interspike Intervals Increases Energy Efficiency of Spiking Neural Networks | — | 0
Self-Supervised Learning for Multi-Channel Neural Transducer | — | 0
DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers | — | 0
Winning Amazon KDD Cup'24 | — | 0
HQOD: Harmonious Quantization for Object Detection | Code | 0
Nonlinear Perturbation-based Non-Convex Optimization over Time-Varying Networks | — | 0
An approach to optimize inference of the DIART speaker diarization pipeline | — | 0
STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs | — | 0
HMDN: Hierarchical Multi-Distribution Network for Click-Through Rate Prediction | — | 0
UniMoT: Unified Molecule-Text Language Model with Discrete Token Representation | — | 0
Reclaiming Residual Knowledge: A Novel Paradigm to Low-Bit Quantization | — | 0
CDFGNN: a Systematic Design of Cache-based Distributed Full-Batch Graph Neural Network Training with Communication Reduction | — | 0
Exploiting Change Blindness for Video Coding: Perspectives from a Less Promising User Study | — | 0
A Simple Low-bit Quantization Framework for Video Snapshot Compressive Imaging | Code | 0
On the Perturbed States for Transformed Input-robust Reinforcement Learning | Code | 0
Breaking the Hourglass Phenomenon of Residual Quantization: Enhancing the Upper Bound of Generative Retrieval | — | 0
Abstractive summarization from Audio Transcription | — | 0
Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection | Code | 3
ThinK: Thinner Key Cache by Query-Driven Pruning | — | 0
Palu: Compressing KV-Cache with Low-Rank Projection | Code | 2
Pruning Large Language Models with Semi-Structural Adaptive Sparse Training | Code | 1
MimiQ: Low-Bit Data-Free Quantization of Vision Transformers with Encouraging Inter-Head Attention Similarity | — | 0
Model Agnostic Hybrid Sharding For Heterogeneous Distributed Inference | — | 0
Temporal Feature Matters: A Framework for Diffusion Model Quantization | Code | 2
Reputation-Driven Asynchronous Federated Learning for Enhanced Trajectory Prediction with Blockchain | — | 0
The Interpretability of Codebooks in Model-Based Reinforcement Learning is Limited | — | 0
Mixed Non-linear Quantization for Vision Transformers | Code | 0
Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers | — | 0
Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models | — | 0
Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance | Code | 0
Low dimensional representation of multi-patient flow cytometry datasets using optimal transport for minimal residual disease detection in leukemia | Code | 0
Pixel Embedding: Fully Quantized Convolutional Neural Network with Differentiable Lookup Table | — | 0
Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models | — | 0
Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners | — | 0
Differentiable Product Quantization for Memory Efficient Camera Relocalization | Code | 0
Uplink Transmit Power Optimization for Distributed Massive MIMO Systems with 1-Bit ADCs | — | 0
Power Measurement Enabled Channel Autocorrelation Matrix Estimation for IRS-Assisted Wireless Communication | — | 0
MetaAug: Meta-Data Augmentation for Post-Training Quantization | Code | 0
FedDM: Enhancing Communication Efficiency and Handling Data Heterogeneity in Federated Diffusion Models | — | 0
A Benchmark for Gaussian Splatting Compression and Quality Assessment Study | Code | 1
Mixed-precision Neural Networks on RISC-V Cores: ISA extensions for Multi-Pumped Soft SIMD Operations | Code | 1
Mixture of Experts with Mixture of Precisions for Tuning Quality of Service | — | 0
Asymptotically Optimal Closed-Form Phase Configuration of 1-bit RISs via Sign Alignment | — | 0
LiNR: Model Based Neural Retrieval on GPUs at LinkedIn | — | 0
MCU-MixQ: A HW/SW Co-optimized Mixed-precision Neural Network Design Framework for MCUs | — | 0
SmartQuant: CXL-based AI Model Store in Support of Runtime Configurable Weight Quantization | — | 0
AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer | Code | 1
Spectra: Surprising Effectiveness of Pretraining Ternary Language Models at Scale | Code | 2
Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients | — | 0
Page 23 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | — | Accuracy | 99.8 | — | Unverified