SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
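
For illustration only, here is a minimal NumPy sketch of the basic idea: symmetric per-tensor int8 quantization with a single scale factor. This is a toy example under assumed conventions, not the adaptive-precision scheme of the source paper; the function names quantize_int8/dequantize_int8 are invented for this sketch.

import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: float32 values -> int8 codes plus a scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # assumed symmetric range [-127, 127]
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Map int8 codes back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Usage: round-trip a random tensor and inspect the quantization error.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs error:", float(np.abs(x - x_hat).max()))

In practice, schemes differ in granularity (per-tensor vs. per-channel scales), symmetry, and whether quantization is applied during training or only post-training; the papers listed below explore many of these variants.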

Papers

Showing 2526–2550 of 4925 papers

Title | Status | Hype
QUENN: QUantization Engine for low-power Neural Networks | - | 0
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search | - | 0
QUIC-FL: Quick Unbiased Compression for Federated Learning | - | 0
QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms | - | 0
QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration | - | 0
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | - | 0
QuPeL: Quantized Personalization with Applications to Federated Learning | - | 0
QVD: Post-training Quantization for Video Diffusion Models | - | 0
Qwen2.5-32B: Leveraging Self-Consistent Tool-Integrated Reasoning for Bengali Mathematical Olympiad Problem Solving | - | 0
Q-YOLO: Efficient Inference for Real-time Object Detection | - | 0
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception | - | 0
R2 Loss: Range Restriction Loss for Model Compression and Quantization | - | 0
Radio: Rate-Distortion Optimization for Large Language Model Compression | - | 0
RAG-based User Profiling for Precision Planning in Mixed-precision Over-the-Air Federated Learning | - | 0
Random Binary Mappings for Kernel Learning and Efficient SVM | - | 0
Random Projections with Asymmetric Quantization | - | 0
Random VLAD based Deep Hashing for Efficient Image Retrieval | - | 0
Random Walk Graph Laplacian based Smoothness Prior for Soft Decoding of JPEG Images | - | 0
RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models | - | 0
Rapid Deployment of Domain-specific Hyperspectral Image Processors with Application to Autonomous Driving | - | 0
Rapid yet accurate Tile-circuit and device modeling for Analog In-Memory Computing | - | 0
Rate-aware Compression for NeRF-based Volumetric Video | - | 0
Rate-Constrained Quantization for Communication-Efficient Federated Learning | - | 0
Rate-Distortion-Cognition Controllable Versatile Neural Image Compression | - | 0
Rate distortion comparison of a few gradient quantizers | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified