SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
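The idea above can be illustrated with a minimal sketch of symmetric per-tensor int8 quantization (an illustrative example with NumPy, not code from the cited paper): every float32 value is mapped onto the int8 grid through a single scale factor, and dequantizing shows the rounding error that low-bit formats trade for cheaper arithmetic.

```python
# Illustrative symmetric per-tensor int8 quantization (assumed scheme,
# not from the cited paper): one scale maps float32 values to int8.
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a float32 tensor to int8 with a single symmetric scale."""
    scale = np.max(np.abs(x)) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print(np.max(np.abs(x - x_hat)))  # rounding error, at most about scale/2
```

In practice, schemes differ in granularity (per-tensor vs. per-channel scales), symmetry (zero-points for asymmetric ranges), and whether quantization is applied post-training or simulated during training, which is what many of the papers listed below explore.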

Papers

Showing 501–550 of 4925 papers

Title | Status | Hype
Survey of Quantization Techniques for On-Device Vision-based Crack Detection | – | 0
Unlocking Efficient Large Inference Models: One-Bit Unrolling Tips the Scales | – | 0
Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding | Code | 2
Choose Your Model Size: Any Compression by a Single Gradient Descent | – | 0
QLESS: A Quantized Approach for Data Valuation and Selection in Large Language Model Fine-Tuning | Code | 0
Continuous Autoregressive Modeling with Stochastic Monotonic Alignment for Speech Synthesis | – | 0
An Inquiry into Datacenter TCO for LLM Inference with FP8 | – | 0
Nearly Lossless Adaptive Bit Switching | Code | 0
Structural Latency Perturbation in Large Language Models Through Recursive State Induction | – | 0
Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference | – | 0
On Noncommutative Quantum Mechanics and the Black-Scholes Model | – | 0
MQuant: Unleashing the Inference Potential of Multimodal Large Language Models via Full Static Quantization | – | 0
Enhancing Field-Oriented Control of Electric Drives with Tiny Neural Network Optimized for Micro-controllers | – | 0
LLM-based Affective Text Generation Quality Based on Different Quantization Values | – | 0
Visual Autoregressive Modeling for Image Super-Resolution | Code | 2
Fully Distributed and Quantized Algorithm for MPC-based Autonomous Vehicle Platooning Optimization | – | 0
Cache Me If You Must: Adaptive Key-Value Quantization for Large Language Models | Code | 1
Mixed-Precision Graph Neural Quantization for Low Bit Large Language Models | – | 0
CodeBrain: Impute Any Brain MRI via Instance-specific Scalar-quantized Codes | – | 0
Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation | – | 0
Post-Training Quantization for 3D Medical Image Segmentation: A Practical Study on Real Inference Engines | Code | 0
Post-Training Quantization for Vision Mamba with k-Scaled Quantization and Reparameterization | – | 0
EdgeMLOps: Operationalizing ML models with Cumulocity IoT and thin-edge.io for Visual quality Inspection | – | 0
Optimizing Large Language Model Training Using FP4 Quantization | – | 0
Stabilization of an unstable reaction-diffusion PDE with input delay despite state and input quantization | – | 0
One-Bit Sigma-Delta DFRC Waveform Design: Using Quantization Noise for Radar Probing | – | 0
SQ-DM: Accelerating Diffusion Models with Aggressive Quantization and Temporal Sparsity | – | 0
Decentralized Low-Rank Fine-Tuning of Large Language Models | – | 0
GaussianToken: An Effective Image Tokenizer with 2D Gaussian Splatting | Code | 2
FBQuant: FeedBack Quantization for Large Language Models | – | 0
RotateKV: Accurate and Robust 2-Bit KV Cache Quantization for LLMs via Outlier-Aware Adaptive Rotations | – | 0
AKVQ-VL: Attention-Aware KV Cache Adaptive 2-Bit Quantization for Vision-Language Models | – | 0
On Accelerating Edge AI: Optimizing Resource-Constrained Environments | – | 0
SwiftPrune: Hessian-Free Weight Pruning for Large Language Models | – | 0
Channel-Aware Constellation Design for Digital OTA Computation | – | 0
End-to-end workflow for machine learning-based qubit readout with QICK and hls4ml | – | 0
On Hardening DNNs against Noisy Computations | – | 0
OstQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitting | Code | 2
Qrazor: Reliable and effortless 4-bit llm quantization by significant data razoring | – | 0
QMamba: Post-Training Quantization for Vision State Space Models | – | 0
MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation Methods | – | 0
Diffusion-based Perceptual Neural Video Compression with Temporal Diffusion Information Reuse | – | 0
DQ-Data2vec: Decoupling Quantization for Multilingual Speech Recognition | – | 0
Quantized Spike-driven Transformer | Code | 1
HEPPO: Hardware-Efficient Proximal Policy Optimization -- A Universal Pipelined Architecture for Generalized Advantage Estimation | – | 0
Irrational Complex Rotations Empower Low-bit Optimizers | – | 0
Sketch and Patch: Efficient 3D Gaussian Representation for Man-Made Scenes | – | 0
GANQ: GPU-Adaptive Non-Uniform Quantization for Large Language Models | Code | 0
SplitQuant: Layer Splitting for Low-Bit Neural Network Quantization | – | 0
HAC++: Towards 100X Compression of 3D Gaussian Splatting | Code | 3
Page 11 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified