SOTA Verified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
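To make the definition concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The function names (`quantize_int8`, `dequantize`) and the per-tensor symmetric scheme are illustrative assumptions, not the method of any particular paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor to int8 plus a scale (symmetric, per-tensor)."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

x = np.array([0.1, -1.5, 3.2, 0.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# x_hat approximates x; the rounding error is bounded by the scale
```

Low-bit training methods such as the one cited above refine this basic scheme, e.g., by adapting the precision of gradients during back propagation.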

Papers

Showing 4401–4425 of 4925 papers

Title | Status | Hype
Low-complexity acoustic scene classification for multi-device audio: analysis of DCASE 2021 Challenge systems | Code | 0
Zero-Shot Dynamic Quantization for Transformer Inference | Code | 0
Differentiable Product Quantization for Memory Efficient Camera Relocalization | Code | 0
Low-bit Quantization of Neural Networks for Efficient Inference | Code | 0
NIF: A Fast Implicit Image Compression with Bottleneck Layers and Modulated Sinusoidal Activations | Code | 0
Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation | Code | 0
NIRVANA: Neural Implicit Representations of Videos with Adaptive Networks and Autoregressive Patch-wise Modeling | Code | 0
NITRO-D: Native Integer-only Training of Deep Convolutional Neural Networks | Code | 0
Noise Invariant Frame Selection: A Simple Method to Address the Background Noise Problem for Text-independent Speaker Verification | Code | 0
Differentiable Fine-grained Quantization for Deep Neural Network Compression | Code | 0
Device-friendly Guava fruit and leaf disease detection using deep learning | Code | 0
Towards Accurate Post-training Quantization for Reparameterized Models | Code | 0
NoisyDECOLLE: Robust Local Learning for SNNs on Neuromorphic Hardware | Code | 0
Development, Optimization, and Deployment of Thermal Forward Vision Systems for Advance Vehicular Applications on Edge Devices | Code | 0
Low-bit Model Quantization for Deep Neural Networks: A Survey | Code | 0
LoTA-QAF: Lossless Ternary Adaptation for Quantization-Aware Fine-Tuning | Code | 0
Sub-token ViT Embedding via Stochastic Resonance Transformers | Code | 0
What if Adversarial Samples were Digital Images | Code | 0
Summary Statistic Privacy in Data Sharing | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
Victoria Amazonica Optimization (VAO): An Algorithm Inspired by the Giant Water Lily Plant | Code | 0
Detection of Structural Change in Geographic Regions of Interest by Self Organized Mapping: Las Vegas City and Lake Mead across the Years | Code | 0
Loss Landscape Analysis for Reliable Quantized ML Models for Scientific Sensing | Code | 0
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations | Code | 0
VideoBERT: A Joint Model for Video and Language Representation Learning | Code | 0
Page 177 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified