SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
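
For intuition, the following is a minimal NumPy sketch of the idea: a float32 tensor is mapped to int8 codes plus a single scale factor, and approximately recovered by multiplying back. The helper names quantize_int8 and dequantize_int8 are illustrative assumptions, not the method of the cited paper; real training schemes also quantize activations and gradients, often with per-channel scales.

import numpy as np

# Symmetric uniform quantization: the largest magnitude maps to +/-127.
def quantize_int8(x: np.ndarray):
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Recover an approximate float32 tensor from the int8 codes and the scale.
def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs round-trip error:", np.abs(x - x_hat).max())  # bounded by scale / 2

Because every value rounds to the nearest multiple of the scale, the round-trip error of this sketch is at most half the quantization step (scale / 2), and it shrinks as the tensor's dynamic range shrinks.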

Papers

Showing 1951–2000 of 4925 papers

Title | Status | Hype
Bit Efficient Quantization for Deep Neural Networks | | 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | | 0
A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling | | 0
Generative Design of Hardware-aware DNNs | | 0
Generative Diffusion Models for Lattice Field Theory | | 0
Highly Efficient and Effective LLMs with Multi-Boolean Architectures | | 0
Generative QoE Modeling: A Lightweight Approach for Telecom Networks | | 0
Generative Semantic Communication for Text-to-Speech Synthesis | | 0
High-Perceptual Quality JPEG Decoding via Posterior Sampling | | 0
HiKonv: High Throughput Quantized Convolution With Novel Bit-wise Management and Computation | | 0
Convergence Rates for Regularized Optimal Transport via Quantization | | 0
DoTA: Weight-Decomposed Tensor Adaptation for Large Language Models | | 0
A Biresolution Spectral Framework for Product Quantization | | 0
Geometry and clustering with metrics derived from separable Bregman divergences | | 0
Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-training | | 0
Getting Free Bits Back from Rotational Symmetries in LLMs | | 0
Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript | | 0
GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks | | 0
BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation | | 0
GIF2Video: Color Dequantization and Temporal Interpolation of GIF images | | 0
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms | | 0
Givens Coordinate Descent Methods for Rotation Matrix Learning in Trainable Embedding Indexes | | 0
High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning | | 0
Don't Fear the Bit Flips: Optimized Coding Strategies for Binary Classification | | 0
Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees | | 0
Global synchronization of multi-agent systems with nonlinear interactions | | 0
Goal-oriented compression for L_p-norm-type goal functions: Application to power consumption scheduling | | 0
Goal-Oriented Quantization: Analysis, Design, and Application to Resource Allocation | | 0
GOAT-TTS: Expressive and Realistic Speech Generation via A Dual-Branch LLM | | 0
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference | | 0
Going Below and Beyond, Off-the-Grid Velocity Estimation from 1-bit Radar Measurements | | 0
Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tile | | 0
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages | | 0
Coordinated Per-Antenna Power Minimization for Multicell Massive MIMO Systems with Low-Resolution Data Converters | | 0
gpcgc: a green point cloud geometry coding method | | 0
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers | | 0
Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization | | 0
GPTQT: Quantize Large Language Models Twice to Push the Efficiency | | 0
BiSup: Bidirectional Quantization Error Suppression for Large Language Models | | 0
GPTVQ: The Blessing of Dimensionality for LLM Quantization | | 0
Correlated quantization for distributed mean estimation and optimization | | 0
GQ-Net: Training Quantization-Friendly Deep Networks | | 0
GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference | | 0
GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks | | 0
AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training | | 0
WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization | | 0
Gradient Based Method for the Fusion of Lattice Quantizers | | 0
Gradient-Based Post-Training Quantization: Challenging the Status Quo | | 0
Gradient Descent Quantizes ReLU Network Features | | 0
Does Video Compression Impact Tracking Accuracy? | | 0
Page 40 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified