SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
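The float-to-fixed-point replacement described above can be sketched with a minimal affine int8 quantizer. This is an illustrative sketch, not the scheme from the cited paper; the function names (`quantize_int8`, `dequantize`) and the per-tensor min/max calibration are assumptions for the example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization of a float32 tensor onto the int8 range [-128, 127].

    Per-tensor min/max calibration is assumed here for simplicity.
    """
    x_min, x_max = float(x.min()), float(x.max())
    # Scale maps the float range onto the 256 available int8 levels.
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    # Zero point shifts the grid so that x_min lands on -128.
    zero_point = int(round(-128 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Approximate reconstruction of the original float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip a random tensor; the reconstruction error is bounded by ~scale.
x = np.random.randn(4, 4).astype(np.float32)
q, scale, zero_point = quantize_int8(x)
x_hat = dequantize(q, scale, zero_point)
print(np.max(np.abs(x - x_hat)))
```

Storing `q` instead of `x` cuts memory 4x, and integer matrix multiplies on `q` are what make int8 inference and training kernels cheap; the `scale`/`zero_point` pair is carried alongside each tensor to recover approximate float values when needed.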

Papers

Showing 25 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| On the efficient representation and execution of deep acoustic models | | 0 |
| Intra-layer Nonuniform Quantization for Deep Convolutional Neural Network | | 0 |
| Random Walk Graph Laplacian based Smoothness Prior for Soft Decoding of JPEG Images | | 0 |
| Adaptive Training of Random Mapping for Data Quantization | | 0 |
| Fast, Compact, and High Quality LSTM-RNN Based Statistical Parametric Speech Synthesizers for Mobile Devices | | 0 |
| DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Code | 0 |
| Deep neural networks are robust to weight binarization and other non-linear distortions | | 0 |
| Learning Power Spectrum Maps from Quantized Power Measurements | | 0 |
| Pairwise Quantization | | 0 |
| Comparison of 14 different families of classification algorithms on 115 binary datasets | | 0 |
| Learning compact binary descriptors with unsupervised deep neural networks | Code | 0 |
| Shortlist Selection With Residual-Aware Distance Estimator for K-Nearest Neighbor Search | | 0 |
| Mining 3D Key-Pose-Motifs for Action Recognition | | 0 |
| HyperDepth: Learning Depth From Structured Light Without Matching | | 0 |
| Multilinear Hyperplane Hashing | | 0 |
| Learning Compact Binary Descriptors With Unsupervised Deep Neural Networks | | 0 |
| A Survey on Learning to Hash | | 0 |
| TripleSpin - a generic compact paradigm for fast machine learning computations | | 0 |
| A Channelized Binning Method for Extraction of Dominant Color Pixel Value | | 0 |
| Composite Correlation Quantization for Efficient Multimodal Retrieval | | 0 |
| Reducing the Model Order of Deep Neural Networks Using Information Theory | | 0 |
| Transfer Hashing with Privileged Information | | 0 |
| LOH and behold: Web-scale visual search, recommendation and clustering using Locally Optimized Hashing | | 0 |
| Supervised Matrix Factorization for Cross-Modality Hashing | | 0 |
| Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality | | 0 |
Page 192 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |