SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
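The idea above can be sketched with a minimal affine (scale + zero-point) int8 quantizer. This is an illustrative example only, not the scheme from the cited paper; the function names and the per-tensor min/max calibration are assumptions for the sketch.

```python
def quantize_int8(values):
    """Affine quantization: map float values onto the int8 range [-128, 127].

    Illustrative sketch: scale and zero-point are derived from the
    per-tensor min/max (a simple calibration choice, assumed here).
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 if hi != lo else 1.0  # 255 = number of int8 steps
    zero_point = round(-lo / scale) - 128           # int8 code representing 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float values from int8 codes."""
    return [(code - zero_point) * scale for code in q]

# Round-trip: the reconstruction error per element is on the order of the scale.
x = [-1.0, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zp)
```

The storage saving is the point: each float32 value (4 bytes) becomes a single int8 code (1 byte) plus one shared scale/zero-point pair per tensor.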

Papers

Showing 201–250 of 4925 papers

Title | Status | Hype
Turbo-ICL: In-Context Learning-Based Turbo Equalization | — | 0
MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design | Code | 1
LiteLMGuard: Seamless and Lightweight On-Device Prompt Filtering for Safeguarding Small Language Models against Quantization-induced Risks and Vulnerabilities | Code | 0
Low-bit Model Quantization for Deep Neural Networks: A Survey | Code | 0
ReactDance: Progressive-Granular Representation for Long-Term Coherent Reactive Dance Generation | — | 0
Mix-QSAM: Mixed-Precision Quantization of the Segment Anything Model | — | 0
Diffusion Model Quantization: A Review | Code | 2
TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation | Code | 3
Learning from Loss Landscape: Generalizable Mixed-Precision Quantization via Adaptive Sharpness-Aware Gradient Aligning | — | 0
RGB-Event Fusion with Self-Attention for Collision Prediction | Code | 1
On-Device LLM for Context-Aware Wi-Fi Roaming | Code | 0
3D Gaussian Splatting Data Compression with Mixture of Priors | — | 0
PROM: Prioritize Reduction of Multiplications Over Lower Bit-Widths for Efficient CNNs | — | 0
Lightweight Clinical Decision Support System using QLoRA-Fine-Tuned LLMs and Retrieval-Augmented Generation | — | 0
Rapid yet accurate Tile-circuit and device modeling for Analog In-Memory Computing | — | 0
End-to-end fully-binarized network design: from Generic Learned Thermometer to Block Pruning | — | 0
Radio: Rate-Distortion Optimization for Large Language Model Compression | — | 0
EntroLLM: Entropy Encoded Weight Compression for Efficient Large Language Model Inference on Edge Devices | — | 0
Bielik 11B v2 Technical Report | — | 0
RobSurv: Vector Quantization-Based Multi-Modal Learning for Robust Cancer Survival Prediction | — | 0
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques | — | 0
Quantitative Analysis of Performance Drop in DeepSeek Model Quantization | Code | 0
NeuroSim V1.5: Improved Software Backbone for Benchmarking Compute-in-Memory Accelerators with Device and Circuit-level Non-idealities | Code | 0
An Empirical Study of Qwen3 Quantization | Code | 2
Quantizing Diffusion Models from a Sampling-Aware Perspective | — | 0
PASCAL: Precise and Efficient ANN-SNN Conversion using Spike Accumulation and Adaptive Layerwise Activation | — | 0
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth | — | 0
Grouped Sequency-arranged Rotation: Optimizing Rotation Transformation for Quantization for Free | — | 0
LMDepth: Lightweight Mamba-based Monocular Depth Estimation for Real-World Deployment | — | 0
Efficient Vision-based Vehicle Speed Estimation | — | 0
Aggregating empirical evidence from data strategy studies: a case on model quantization | — | 0
Optimizing Deep Neural Networks using Safety-Guided Self Compression | Code | 0
Fast and Low-Cost Genomic Foundation Models via Outlier Removal | Code | 1
Generative QoE Modeling: A Lightweight Approach for Telecom Networks | — | 0
Precision Where It Matters: A Novel Spike Aware Mixed-Precision Quantization Strategy for LLaMA-based Language Models | — | 0
Optimization of embeddings storage for RAG systems using quantization and dimensionality reduction techniques | — | 0
Softpick: No Attention Sink, No Massive Activations with Rectified Softmax | Code | 2
Clustering-Based Evolutionary Federated Multiobjective Optimization and Learning | — | 0
APG-MOS: Auditory Perception Guided-MOS Predictor for Synthetic Speech | — | 0
TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate | — | 0
FineQ: Software-Hardware Co-Design for Low-Bit Fine-Grained Mixed-Precision Quantization of LLMs | — | 0
Partition Map-Based Fast Block Partitioning for VVC Inter Coding | Code | 0
Pushing the boundary on Natural Language Inference | — | 0
Fast Autoregressive Models for Continuous Latent Generation | — | 0
Precision Neural Network Quantization via Learnable Adaptive Modules | — | 0
On-Device Qwen2.5: Efficient LLM Inference with Model Compression and Hardware Acceleration | — | 0
Distributed Optimization with Efficient Communication, Event-Triggered Solution Enhancement, and Operation Stopping | — | 0
Hexcute: A Tile-based Programming Language with Automatic Layout and Task-Mapping Synthesis | — | 0
TeLLMe: An Energy-Efficient Ternary LLM Accelerator for Prefilling and Decoding on Edge FPGAs | — | 0
A LoRA-Based Approach to Fine-Tuning LLMs for Educational Guidance in Resource-Constrained Settings | Code | 0
Page 5 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified