Quantization

Quantization is a promising technique for reducing the computational cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
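To make the float-to-fixed-point mapping concrete, below is a minimal NumPy sketch of symmetric per-tensor int8 quantization. The function names (`quantize_int8`, `dequantize`) and the single-scale scheme are illustrative assumptions for this page, not the method of the cited paper or of any paper listed below.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> (int8 values, float scale)."""
    # Map the largest absolute value onto the int8 limit; guard against all-zero input.
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original float32 values."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    x = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(x)
    x_hat = dequantize(q, scale)
    print("max abs reconstruction error:", float(np.abs(x - x_hat).max()))
```

Schemes used in practice, such as the adaptive precision training of the cited paper, go further by choosing scales or bit-widths dynamically during backpropagation rather than fixing one per-tensor scale.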

Papers

Showing 701–750 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| Arch-Net: Model Distillation for Architecture Agnostic Model Deployment | Code | 1 |
| Block-wise Word Embedding Compression Revisited: Better Weighting and Structuring | Code | 1 |
| Matching-oriented Embedding Quantization For Ad-hoc Retrieval | Code | 1 |
| VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization | Code | 1 |
| Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1 |
| TOD: GPU-accelerated Outlier Detection via Tensor Operations | Code | 1 |
| Convolutional Autoencoder-Based Phase Shift Feedback Compression for Intelligent Reflecting Surface-Assisted Wireless Systems | Code | 1 |
| Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks | Code | 1 |
| Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation | Code | 1 |
| BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements | Code | 1 |
| Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations | Code | 1 |
| SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing | Code | 1 |
| Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval | Code | 1 |
| LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time | Code | 1 |
| Random matrices in service of ML footprint: ternary random features with no performance loss | Code | 1 |
| One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective | Code | 1 |
| Transformer-based Transform Coding | Code | 1 |
| Understanding and Overcoming the Challenges of Efficient Transformer Quantization | Code | 1 |
| Vision Transformer Hashing for Image Retrieval | Code | 1 |
| Unbiased Single-scale and Multi-scale Quantizers for Distributed Optimization | Code | 1 |
| HPTQ: Hardware-Friendly Post Training Quantization | Code | 1 |
| Phrase Retrieval Learns Passage Retrieval, Too | Code | 1 |
| OMPQ: Orthogonal Mixed Precision Quantization | Code | 1 |
| Fine-grained Data Distribution Alignment for Post-Training Quantization | Code | 1 |
| Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression | Code | 1 |
| Image Compression with Recurrent Neural Network and Generalized Divisive Normalization | Code | 1 |
| Optimal Target Shape for LiDAR Pose Estimation | Code | 1 |
| Diverse Sample Generation: Pushing the Limit of Generative Data-free Quantization | Code | 1 |
| Compact representations of convolutional neural networks via weight pruning and quantization | Code | 1 |
| Dynamic Network Quantization for Efficient Video Inference | Code | 1 |
| FOX-NAS: Fast, On-device and Explainable Neural Architecture Search | Code | 1 |
| Generalizable Mixed-Precision Quantization via Attribution Rank Preservation | Code | 1 |
| Jointly Optimizing Query Encoder and Product Quantization to Improve Retrieval Performance | Code | 1 |
| Uniformity in Heterogeneity: Diving Deep into Count Interval Partition for Crowd Counting | Code | 1 |
| SimCC: a Simple Coordinate Classification Perspective for Human Pose Estimation | Code | 1 |
| BAGUA: Scaling up Distributed Learning with System Relaxations | Code | 1 |
| Secure Quantized Training for Deep Learning | Code | 1 |
| APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores | Code | 1 |
| VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion | Code | 1 |
| Task-driven Semantic Coding via Reinforcement Learning | Code | 1 |
| Transferable Sparse Adversarial Attack | Code | 1 |
| Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation | Code | 1 |
| Post-Training Sparsity-Aware Quantization | Code | 1 |
| Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices | Code | 1 |
| Anchor-based Plain Net for Mobile Image Super-Resolution | Code | 1 |
| Continual Learning via Bit-Level Information Preserving | Code | 1 |
| Joint Learning of Deep Retrieval Model and Product Quantization based Embedding Index | Code | 1 |
| Pareto-Optimal Quantized ResNet Is Mostly 4-bit | Code | 1 |
| Binarized Aggregated Network with Quantization: Flexible Deep Learning Deployment for CSI Feedback in Massive MIMO System | Code | 1 |
| ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |