
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
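As a rough illustration of that replacement, here is a minimal sketch of uniform affine int8 quantization in NumPy. This is generic post-hoc quantization, not the adaptive-precision training scheme of the cited paper, and the helper names quantize_int8 and dequantize are made up for this example:

    import numpy as np

    def quantize_int8(x: np.ndarray):
        # Uniform affine quantization of a float32 tensor to int8.
        # Illustrative sketch only, not the cited paper's training scheme.
        qmin, qmax = -128, 127
        scale = max((x.max() - x.min()) / (qmax - qmin), 1e-8)  # avoid div-by-zero
        zero_point = int(round(qmin - x.min() / scale))
        q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
        return q, scale, zero_point

    def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
        # Map int8 codes back to approximate float32 values.
        return scale * (q.astype(np.float32) - zero_point)

    x = np.random.randn(8).astype(np.float32)
    q, s, z = quantize_int8(x)
    print(np.abs(x - dequantize(q, s, z)).max())  # stays within ~one step (s)

Round-to-nearest keeps the reconstruction error within about one quantization step; schemes like the cited paper's go further and adapt the precision during training rather than fixing it at int8.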

Papers

Showing 3326–3350 of 4925 papers (page 134 of 197)

Title | Status | Hype
ML-EXray: Visibility into ML Deployment on the Edge | | 0
Rethinking Deconvolution for 2D Human Pose Estimation: Light yet Accurate Model for Real-time Edge Computing | | 0
LW-GCN: A Lightweight FPGA-based Graph Convolutional Network Accelerator | | 0
Constructing High-Order Signed Distance Maps from Computed Tomography Data with Application to Bone Morphometry | | 0
Simple and Effective Unsupervised Redundancy Elimination to Compress Dense Vectors for Passage Retrieval | | 0
Structure Information is the Key: Self-Attention RoI Feature Extractor in 3D Object Detection | | 0
HW-TSC’s Participation in the WMT 2021 Efficiency Shared Task | | 0
PP-ShiTu: A Practical Lightweight Image Recognition System | Code | 0
Efficient Machine Translation with Model Pruning and Quantization | | 0
Revealing and Protecting Labels in Distributed Training | Code | 0
Reconfigurable Intelligent Surface-induced Randomness for mmWave Key Generation | | 0
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning | | 0
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA | | 0
RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions | | 0
Nash equilibrium of multi-agent graphical game with a privacy information encrypted learning algorithm | | 0
FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding | | 0
MERCURY: Accelerating DNN Training By Exploiting Input Similarity | | 0
Differential Deep Detection in Massive MIMO With One-Bit ADC | | 0
High-Order Signed Distance Transform of Sampled Signals | | 0
Algorithms for the Communication of Samples | | 0
Demystifying and Generalizing BinaryConnect | | 0
Deep Asymmetric Hashing with Dual Semantic Regression and Class Structure Quantization | | 0
Task-Based Graph Signal Compression | Code | 0
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness | | 0
Vis-TOP: Visual Transformer Overlay Processor | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified