SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
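As a rough illustration of the float-to-fixed-point mapping described above, here is a minimal sketch of symmetric per-tensor quantization of a float32 array to int8 in NumPy. It is a generic textbook scheme, not the method of the cited paper or of any paper listed below; the function names `quantize_int8` and `dequantize` and the symmetric [-127, 127] range are illustrative choices.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, np.float32]:
    """Symmetric per-tensor quantization: float32 -> int8 codes plus a scale."""
    # The scale maps the largest magnitude in x onto the int8 range [-127, 127].
    max_abs = np.max(np.abs(x))
    scale = np.float32(max_abs / 127.0) if max_abs > 0 else np.float32(1.0)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.float32) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Usage: quantize a random tensor and measure the round-trip error.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs round-trip error:", np.max(np.abs(x - x_hat)))  # bounded by scale / 2
```

A single symmetric per-tensor scale is the simplest variant; per-channel, asymmetric, and mixed-precision schemes, which several of the papers below study, trade extra bookkeeping for lower quantization error.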

Papers

Showing 1451–1500 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes | | 0 |
| Nanoscaling Floating-Point (NxFP): NanoMantissa, Adaptive Microexponents, and Code Recycling for Direct-Cast Compression of Large Language Models | | 0 |
| Progressive Compression with Universally Quantized Diffusion Models | | 0 |
| Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks | | 0 |
| TinySubNets: An efficient and low capacity continual learning strategy | Code | 0 |
| Enhancing Off-Grid One-Bit DOA Estimation with Learning-Based Sparse Bayesian Approach for Non-Uniform Sparse Array | | 0 |
| Memory-Efficient 4-bit Preconditioned Stochastic Optimization | | 0 |
| Efficient Generative Modeling with Residual Vector Quantization-Based Tokens | | 0 |
| MVQ: Towards Efficient DNN Compression and Acceleration with Masked Vector Quantization | | 0 |
| TTAQ: Towards Stable Post-training Quantization in Continuous Domain Adaptation | | 0 |
| VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization | | 0 |
| Panacea: Novel DNN Accelerator using Accuracy-Preserving Asymmetric Quantization and Energy-Saving Bit-Slice Sparsity | | 0 |
| On Round-Off Errors and Gaussian Blur in Superresolution and in Image Registration | | 0 |
| DQA: An Efficient Method for Deep Quantization of Deep Neural Network Activations | | 0 |
| Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices | | 0 |
| CRVQ: Channel-relaxed Vector Quantization for Extreme Compression of LLMs | | 0 |
| Breaking the Bias: Recalibrating the Attention of Industrial Anomaly Detection | | 0 |
| TurboAttention: Efficient Attention Approximation For High Throughputs LLMs | | 0 |
| Low-Rank Correction for Quantized LLMs | | 0 |
| QuantFormer: Learning to Quantize for Neural Activity Forecasting in Mouse Visual Cortex | | 0 |
| Post-Training Non-Uniform Quantization for Convolutional Neural Networks | | 0 |
| Machine learning-driven conservative-to-primitive conversion in hybrid piecewise polytropic and tabulated equations of state | | 0 |
| Compression for Better: A General and Stable Lossless Compression Framework | | 0 |
| Efficiency Meets Fidelity: A Novel Quantization Framework for Stable Diffusion | | 0 |
| FP=xINT: A Low-Bit Series Expansion Algorithm for Post-Training Quantization | | 0 |
| Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks | | 0 |
| Fuzzy Norm-Explicit Product Quantization for Recommender Systems | | 0 |
| Vision Transformer-based Semantic Communications With Importance-Aware Quantization | | 0 |
| SizeGS: Size-aware Compression of 3D Gaussians with Hierarchical Mixed Precision Quantization | | 0 |
| Taming Sensitive Weights: Noise Perturbation Fine-tuning for Robust LLM Quantization | | 0 |
| Error Feedback Approach for Quantization Noise Reduction of Distributed Graph Filters | | 0 |
| Sensor Selection and Distributed Quantization for Energy Efficiency in Massive MTC | | 0 |
| GAQAT: gradient-adaptive quantization-aware training for domain generalization | | 0 |
| Efficient Distributed Training through Gradient Compression with Sparsification and Quantization Techniques | | 0 |
| Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search | | 0 |
| ULMRec: User-centric Large Language Model for Sequential Recommendation | | 0 |
| SKIM: Any-bit Quantization Pushing The Limits of Post-Training Quantization | | 0 |
| Quantized and Interpretable Learning Scheme for Deep Neural Networks in Classification Task | | 0 |
| Unifying KV Cache Compression for Large Language Models with LeanKV | | 0 |
| FlashAttention on a Napkin: A Diagrammatic Approach to Deep Learning IO-Awareness | | 0 |
| Prompting Large Language Models for Clinical Temporal Relation Extraction | | 0 |
| Designing DNNs for a trade-off between robustness and processing performance in embedded devices | | 0 |
| Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective | Code | 0 |
| Mixed-Precision Quantization: Make the Best Use of Bits Where They Matter Most | | 0 |
| CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models | | 0 |
| 3D representation in 512-Byte: Variational tokenizer is the key for autoregressive 3D generation | | 0 |
| CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | | 0 |
| Robust Precoding for Multi-User Visible Light Communications with Quantized Channel Information | | 0 |
| Scaling Image Tokenizers with Grouped Spherical Quantization | Code | 0 |
| Lean classical-quantum hybrid neural network model for image classification | | 0 |
Page 30 of 99

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |