
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
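
As a concrete illustration of the idea (not the method of the cited paper), below is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The function names and the symmetric round-to-nearest scheme are assumptions chosen for clarity; real systems often add per-channel scales, zero points, or calibration.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: float32 -> (int8 values, scale)."""
    # The scale maps the largest magnitude in x onto the int8 limit 127.
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:  # all-zero tensor: any nonzero scale works
        scale = 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Map int8 values back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Round-trip a random weight tensor and inspect the quantization error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
error = np.max(np.abs(w - dequantize_int8(q, scale)))
print(f"max abs round-trip error: {error:.5f} (scale = {scale:.5f})")
```

For in-range values the round-trip error is at most half the scale per element, which is the accuracy cost of the 4x smaller representation.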

Papers

Showing 2151–2200 of 4925 papers

Title | Status | Hype
Magnificent Minified Models | – | 0
ZeRO++: Extremely Efficient Collective Communication for Giant Model Training | – | 0
HiNeRV: Video Compression with Hierarchical Encoding-based Neural Representation | Code | 1
Evaluation and Optimization of Gradient Compression for Distributed Deep Learning | Code | 1
Neural Network Compression using Binarization and Few Full-Precision Weights | – | 0
PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators | Code | 0
High-performance deep spiking neural networks with 0.3 spikes per neuron | – | 0
GQFedWAvg: Optimization-Based Quantized Federated Learning in General Edge Computing Systems | Code | 0
INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error Correction through Low-Rank Adaptation | Code | 4
SqueezeLLM: Dense-and-Sparse Quantization | Code | 6
Discrete Graph Auto-Encoder | – | 0
MFSN: Multi-perspective Fusion Search Network For Pre-training Knowledge in Speech Emotion Recognition | – | 0
NF4 Isn't Information Theoretically Optimal (and that's Good) | Code | 1
Resource Efficient Neural Networks Using Hessian Based Pruning | – | 0
Efficient and Robust Quantization-aware Training via Adaptive Coreset Selection | Code | 1
Sparse-Inductive Generative Adversarial Hashing for Nearest Neighbor Search | – | 0
High-Fidelity Audio Compression with Improved RVQGAN | Code | 3
End-to-End Neural Network Compression via ℓ1/ℓ2 Regularized Latency Surrogates | – | 0
Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition | Code | 0
Precision-aware Latency and Energy Balancing on Multi-Accelerator Platforms for DNN Inference | – | 0
Iterative Signal Processing for Integrated Sensing and Communication Systems | – | 0
Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization | – | 0
MobileNMT: Enabling Translation in 15MB and 30ms | Code | 1
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression | Code | 2
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware | – | 0
OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models | Code | 1
Temporal Dynamic Quantization for Diffusion Models | – | 0
Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference | – | 0
An Information-Theoretic Analysis of Self-supervised Discrete Representations of Speech | Code | 0
Binary and Ternary Natural Language Generation | Code | 1
Group channel pruning and spatial attention distilling for object detection | – | 0
Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training | Code | 1
Quantization-Aware and Tensor-Compressed Training of Transformers for Natural Language Understanding | – | 0
Towards Learning Discrete Representations via Self-Supervision for Wearables-Based Human Activity Recognition | – | 0
FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization | Code | 0
On the Effectiveness of Hybrid Mutual Information Estimation | – | 0
Dynamic quantized consensus under DoS attacks: Towards a tight zooming-out factor | – | 0
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration | Code | 6
Asymptotic Performance Analysis of Large-Scale Active IRS-Aided Wireless Network | – | 0
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN | Code | 1
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training | Code | 2
Compression with Bayesian Implicit Neural Representations | Code | 1
AdANNS: A Framework for Adaptive Semantic Search | Code | 1
PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models | – | 0
Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function | – | 0
Implementation of a framework for deploying AI inference engines in FPGAs | – | 0
Intriguing Properties of Quantization at Scale | – | 0
Towards Accurate Post-training Quantization for Diffusion Models | Code | 1
Stochastic Gradient Langevin Dynamics Based on Quantization with Increasing Resolution | – | 0
Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees | – | 0
Page 44 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified