SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
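As a generic illustration of the idea only (not the specific back-propagation scheme of the cited paper), a symmetric per-tensor int8 quantize/dequantize round-trip can be sketched as below; the function names `quantize_int8` and `dequantize` are hypothetical, not from the paper:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127]."""
    scale = float(np.max(np.abs(x))) / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.3], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Rounding bounds the per-element reconstruction error by scale / 2.
```

The cheap int8 representation is what makes fixed-point matrix multiplies fast on commodity hardware; the `scale` factor is kept in floating point so the result can be mapped back to the original range.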

Papers

Showing 2301–2350 of 4925 papers

Title | Status | Hype
Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers | – | 0
Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference | – | 0
Empirical Evaluation of Post-Training Quantization Methods for Language Tasks | – | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN | – | 0
An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM | – | 0
Emotion Recognition Using Speaker Cues | – | 0
Emergent Quantized Communication | – | 0
Embedding Compression with Isotropic Iterative Quantization | – | 0
Cell growth rate dictates the onset of glass to fluid-like transition and long time super-diffusion in an evolving cell colony | – | 0
ANTLER: Bayesian Nonlinear Tensor Learning and Modeler for Unstructured, Varying-Size Point Cloud Data | – | 0
Adaptive Periodic Averaging: A Practical Approach to Reducing Communication in Distributed Learning | – | 0
Embedding Compression for Efficient Re-Identification | – | 0
Embedded Phase Shifting: Robust Phase Shifting With Embedded Signals | – | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | – | 0
ELMGS: Enhancing memory and computation scaLability through coMpression for 3D Gaussian Splatting | – | 0
CEG4N: Counter-Example Guided Neural Network Quantization Refinement | – | 0
Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks | – | 0
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization | – | 0
Ef-QuantFace: Streamlined Face Recognition with Small Data and Low-Bit Precision | – | 0
EfQAT: An Efficient Framework for Quantization-Aware Training | – | 0
CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning | – | 0
An Overview on IEEE 802.11bf: WLAN Sensing | – | 0
Efficient-VQGAN: Towards High-Resolution Image Generation with Efficient Vision Transformers | – | 0
CBQ: Cross-Block Quantization for Large Language Models | – | 0
Efficient Vision-based Vehicle Speed Estimation | – | 0
Causal Speech Enhancement with Predicting Semantics based on Quantized Self-supervised Learning Features | – | 0
An Overview of Neural Network Compression | – | 0
Efficient Systolic Array Based on Decomposable MAC for Quantized Deep Neural Networks | – | 0
Efficient Super Resolution Using Binarized Neural Network | – | 0
An Overview of Datatype Quantization Techniques for Convolutional Neural Networks | – | 0
Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction | – | 0
Starting Positions Matter: A Study on Better Weight Initialization for Neural Network Quantization | – | 0
Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals | – | 0
Efficient Speech Representation Learning with Low-Bit Quantization | – | 0
Efficient Approximate Search for Sets of Vectors | – | 0
Efficient Quantum Approximate kNN Algorithm via Granular-Ball Computing | – | 0
A Novel Unified Model for Multi-exposure Stereo Coding Based on Low Rank Tucker-ALS and 3D-HEVC | – | 0
Efficient Quantization Strategies for Latent Diffusion Models | – | 0
Can Large Language Models Understand Context? | – | 0
A Novel Structure-Agnostic Multi-Objective Approach for Weight-Sharing Compression in Deep Neural Networks | – | 0
Efficient Point Transformer for Large-scale 3D Scene Understanding | – | 0
Efficient On-the-fly Category Retrieval using ConvNets and GPUs | – | 0
Can General-Purpose Large Language Models Generalize to English-Thai Machine Translation? | – | 0
A Novel Physics-based Channel Model for Reconfigurable Intelligent Surface-assisted Multi-user Communication Systems | – | 0
Adaptive Joint Optimization for 3D Reconstruction with Differentiable Rendering | – | 0
Efficient Neural PDE-Solvers using Quantization Aware Training | – | 0
Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review | – | 0
Cancer Subtyping via Embedded Unsupervised Learning on Transcriptomics Data | – | 0
Efficient Neural Compression with Inference-time Decoding | – | 0
CAMBI: Contrast-aware Multiscale Banding Index | – | 0
Page 47 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified