SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
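
To make the float-to-fixed-point mapping above concrete, here is a minimal sketch of uniform affine int8 quantization in NumPy. It is an illustration under common post-training-quantization assumptions (per-tensor scale and zero-point); the function and variable names are illustrative and not taken from any paper listed below.

```python
# Minimal sketch of uniform affine (asymmetric) int8 quantization.
# Names (quantize_int8, dequantize_int8) are illustrative, not from any listed paper.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor to int8 using a per-tensor scale and zero-point."""
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) if x_max > x_min else 1.0
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximation of the original float32 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, s, zp = quantize_int8(weights)
print("max abs quantization error:", np.abs(weights - dequantize_int8(q, s, zp)).max())
```

The low-bit integers are what get stored and multiplied cheaply; the scale and zero-point are kept alongside them so activations or weights can be mapped back to the floating-point domain when needed.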

Papers

Showing 51–75 of 4925 papers

Title | Status | Hype
Relative Entropy Regularized Reinforcement Learning for Efficient Encrypted Policy Synthesis | | 0
FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation | Code | 1
Deep Learning Model Acceleration and Optimization Strategies for Real-Time Recommendation Systems | | 0
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers | | 0
MNN-LLM: A Generic Inference Engine for Fast Large Language Model Deployment on Mobile Devices | | 0
Starting Positions Matter: A Study on Better Weight Initialization for Neural Network Quantization | | 0
Post-Training Quantization for Video Matting | | 0
Discrete Audio Tokens: More Than a Survey! | | 0
SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving | | 0
Q-SAM2: Accurate Quantization for Segment Anything Model 2 | | 0
HadaNorm: Diffusion Transformer Quantization through Mean-Centered Transformations | | 0
AWP: Activation-Aware Weight Pruning and Quantization with Projected Gradient Descent | | 0
Hardware Limitations and Optimization Approach in 1-Bit RIS Design at 28 GHz | | 0
Implementing Keyword Spotting on the MCUX947 Microcontroller with Integrated NPU | | 0
POLARON: Precision-aware On-device Learning and Adaptive Runtime-cONfigurable AI acceleration | | 0
Optimizing Learned Image Compression on Scalar and Entropy-Constraint Quantization | | 0
Decentralized Optimization on Compact Submanifolds by Quantized Riemannian Gradient Tracking | | 0
Evaluating Large Language Models on the Frame and Symbol Grounding Problems: A Zero-shot Benchmark | Code | 0
BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation | Code | 2
LiteVLM: A Low-Latency Vision-Language Model Inference Pipeline for Resource-Constrained Environments | | 0
Highly Compressed Tokenizer Can Generate Without Training | Code | 3
Auditing Black-Box LLM APIs with a Rank-Based Uniformity Test | | 0
QForce-RL: Quantized FPGA-Optimized Reinforcement Learning Compute Engine | | 0
Enabling On-Device Medical AI Assistants via Input-Driven Saliency Adaptation | | 0
Towards AI-Native Fronthaul: Neural Compression for NextG Cloud RAN | | 0
Page 3 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified