SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
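As an illustration of the float32-to-int8 mapping described above, here is a minimal sketch of symmetric uniform quantization. The function names and the symmetric 127-level scheme are illustrative assumptions, not the method of the cited paper:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto int8 [-127, 127] with a single scale factor.

    This is a symmetric, per-tensor scheme chosen for illustration; real
    frameworks also offer asymmetric and per-channel variants.
    """
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 values; error is bounded by scale / 2.
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.3, 0.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
```

Storing `q` instead of `x` cuts memory by 4x, and integer arithmetic on `q` is cheaper than floating-point arithmetic on most hardware, which is the cost saving the papers below exploit.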

Papers

Showing 2701–2725 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| Design Methodology for Deep Out-of-Distribution Detectors in Real-Time Cyber-Physical Systems | Code | 1 |
| CrAM: A Compression-Aware Minimizer | Code | 1 |
| Vector Quantized Image-to-Image Translation | | 0 |
| Adaptive Asymmetric Label-guided Hashing for Multimedia Search | | 0 |
| Reconciling Security and Communication Efficiency in Federated Learning | Code | 1 |
| Low-complexity CNNs for Acoustic Scene Classification | | 0 |
| HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation | | 0 |
| Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization | | 0 |
| Quantized Sparse Weight Decomposition for Neural Network Compression | | 0 |
| Characterizing Coherent Integrated Photonic Neural Networks under Imperfections | | 0 |
| Auto-regressive Image Synthesis with Integrated Quantization | | 0 |
| CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution | Code | 1 |
| Mixed-Precision Inference Quantization: Radically Towards Faster inference speed, Lower Storage requirement, and Lower Loss | | 0 |
| Quantized Training of Gradient Boosting Decision Trees | Code | 6 |
| Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach | Code | 1 |
| Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance | Code | 1 |
| Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design | | 0 |
| Context Unaware Knowledge Distillation for Image Retrieval | Code | 0 |
| RepBNN: towards a precise Binary Neural Network with Enhanced Feature Map via Repeating | Code | 0 |
| FewGAN: Generating from the Joint Distribution of a Few Images | | 0 |
| Accelerating Deep Learning Model Inference on Arm CPUs with Ultra-Low Bit Quantization and Runtime | | 0 |
| Is Integer Arithmetic Enough for Deep Learning Training? | | 0 |
| Quantized Consensus under Data-Rate Constraints and DoS Attacks: A Zooming-In and Holding Approach | | 0 |
| Latent-Domain Predictive Neural Speech Coding | | 0 |
| Optimal Database Allocation in Finite Time with Efficient Communication and Transmission Stopping over Dynamic Networks | | 0 |
Page 109 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |