SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
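The replacement of float32 values with int8 values described above can be sketched with standard affine quantization: pick a scale and zero-point from the observed value range, then map each float to the nearest integer in [0, 255]. This is a minimal illustration of the general idea, not the scheme from the cited paper; the function names and the choice of asymmetric (affine) quantization are assumptions for the example.

```python
def quantize(xs, num_bits=8):
    """Affine (asymmetric) quantization of floats to unsigned ints.

    Maps the observed range [min(xs), max(xs)] onto [0, 2**num_bits - 1]
    via a scale and zero-point, as in common int8 inference schemes.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant inputs
    zero_point = round(qmin - lo / scale)     # integer that represents float 0.0
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return [(v - zero_point) * scale for v in q]
```

A round trip loses at most about half a quantization step per value, which is why 8-bit formats are usually accurate enough for inference while cutting memory and compute cost.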

Papers

Showing 3451–3475 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| Kramers-Kronig Receiver Combined With Digital Resolution Enhancer | | 0 |
| CREW: Computation Reuse and Efficient Weight Storage for Hardware-accelerated MLPs and RNNs | | 0 |
| DHNet: Double MPEG-4 Compression Detection via Multiple DCT Histograms | | 0 |
| Support Recovery in Universal One-bit Compressed Sensing | | 0 |
| A High-Performance Adaptive Quantization Approach for Edge CNN Applications | | 0 |
| Deep Learning to Ternary Hash Codes by Continuation | | 0 |
| Continuous-variable neural-network quantum states and the quantum rotor model | Code | 0 |
| MAFAT: Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference | | 0 |
| Efficient Approximate Search for Sets of Vectors | | 0 |
| HEMP: High-order Entropy Minimization for neural network comPression | | 0 |
| LANA: Latency Aware Network Acceleration | | 0 |
| Regularized Classification-Aware Quantization | Code | 0 |
| Model compression as constrained optimization, with application to neural nets. Part V: combining compressions | | 0 |
| Image restoration quality assessment based on regional differential information entropy | | 0 |
| An Embedded Iris Recognition System Optimization using Dynamically Reconfigurable Decoder with LDPC Codes | | 0 |
| Patch-Wise Spatial-Temporal Quality Enhancement for HEVC Compressed Video | Code | 0 |
| Discrete-Valued Neural Communication | | 0 |
| Deep Learning Methods for Joint Optimization of Beamforming and Fronthaul Quantization in Cloud Radio Access Networks | | 0 |
| Q-SpiNN: A Framework for Quantizing Spiking Neural Networks | | 0 |
| Multi-modality Deep Restoration of Extremely Compressed Face Videos | | 0 |
| A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification | | 0 |
| Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0 |
| Orthonormal Product Quantization Network for Scalable Face Image Retrieval | Code | 0 |
| Power Law Graph Transformer for Machine Translation and Representation Learning | Code | 0 |
| Post-Training Quantization for Vision Transformer | | 0 |
Page 139 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |