SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
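For intuition, here is a minimal NumPy sketch of the core idea: symmetric per-tensor quantization of float32 values to int8 with a single scale factor, plus the matching dequantization step. The max-abs-to-127 scale choice is a common convention used for illustration here, not the scheme of any particular paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values onto the int8 grid with one per-tensor scale."""
    # Largest magnitude maps to 127; guard against an all-zero tensor.
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(8).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print(np.max(np.abs(x - x_hat)))  # rounding error is at most ~scale / 2
```

Storing and computing on the int8 codes (with the float scale applied only at the boundaries) is what yields the memory and compute savings described above.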

Papers

Showing 1–50 of 4925 papers

Title | Status | Hype
Qwen2 Technical Report | Code | 13
IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System | Code | 11
CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models | Code | 11
SWIFT: A Scalable lightWeight Infrastructure for Fine-Tuning | Code | 11
CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens | Code | 11
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision | Code | 11
OpenVLA: An Open-Source Vision-Language-Action Model | Code | 9
SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization | Code | 7
Chronos: Learning the Language of Time Series | Code | 7
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration | Code | 7
From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations | Code | 7
SageAttention2++: A More Efficient Implementation of SageAttention2 | Code | 7
Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation | Code | 7
GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers | Code | 7
Chinese-Vicuna: A Chinese Instruction-following Llama-based Model | Code | 7
Semantic Routing for Enhanced Performance of LLM-Assisted Intent-Based 5G Core Network Management and Orchestration | Code | 7
Quantized Training of Gradient Boosting Decision Trees | Code | 6
QLoRA: Efficient Finetuning of Quantized LLMs | Code | 6
GLM-130B: An Open Bilingual Pre-trained Model | Code | 6
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration | Code | 6
SqueezeLLM: Dense-and-Sparse Quantization | Code | 6
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models | Code | 6
CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving | Code | 5
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients | Code | 5
SpinQuant: LLM quantization with learned rotations | Code | 5
Extreme Compression of Large Language Models via Additive Quantization | Code | 5
BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster | Code | 5
MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation | Code | 5
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | Code | 5
SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks | Code | 5
Autoregressive Image Generation without Vector Quantization | Code | 5
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | Code | 5
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | Code | 5
MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models | Code | 5
SCBench: A KV Cache-Centric Analysis of Long-Context Methods | Code | 5
YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications | Code | 5
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5
Restructuring Vector Quantization with the Rotation Trick | Code | 4
QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks | Code | 4
FP8 Formats for Deep Learning | Code | 4
QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving | Code | 4
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs | Code | 4
Efficient Post-training Quantization with FP8 Formats | Code | 4
Polysemous codes | Code | 4
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation | Code | 4
Fast Inference of Mixture-of-Experts Language Models with Offloading | Code | 4
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | Code | 4
Billion-scale similarity search with GPUs | Code | 4
DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads | Code | 4
LLM Inference Unveiled: Survey and Roofline Model Insights | Code | 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified