SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
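As a minimal sketch of the idea, the following NumPy example quantizes a float32 tensor to int8 using a single per-tensor scale and measures the round-trip error. This is an illustrative uniform-quantization scheme, not the specific method of the cited paper; the names quantize_int8 and dequantize are hypothetical.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto the int8 range with one per-tensor scale."""
    # Map the largest magnitude to 127; guard against a zero scale for all-zero input.
    scale = max(np.abs(x).max() / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximately recover the float32 tensor from its int8 representation."""
    return q.astype(np.float32) * scale

weights = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()
print(f"max round-trip error: {error:.5f}")  # bounded by roughly scale / 2
```

The cost savings come from performing the bulk arithmetic (e.g., matrix multiplies) directly on the int8 values, applying the scale only once at the output rather than per element.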

Papers

Showing 3351–3400 of 4925 papers

Title | Status | Hype
A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays | — | 0
Continuous Control with Action Quantization from Demonstrations | — | 0
PR-CIM: a Variation-Aware Binary-Neural-Network Framework for Process-Resilient Computation-in-memory | — | 0
Wideband and Entropy-Aware Deep Soft Bit Quantization | Code | 0
Low-Precision Quantization for Efficient Nearest Neighbor Search | — | 0
PTQ-SL: Exploring the Sub-layerwise Post-training Quantization | — | 0
Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization | — | 0
A Memory-Efficient Learning Framework for Symbol-Level Precoding with Quantized NN Weights | — | 0
Toward nonlinear dynamic control over encrypted data for infinite time horizon | — | 0
Memory-Efficient CNN Accelerator Based on Interlayer Feature Map Compression | — | 0
A comprehensive review of Binary Neural Network | — | 0
Are Words the Quanta of Human Language? Extending the Domain of Quantum Cognition | — | 0
A Deep Learning Inference Scheme Based on Pipelined Matrix Multiplication Acceleration Design and Non-uniform Quantization | — | 0
Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks | — | 0
Cognitive Coding of Speech | — | 0
Federated Learning via Plurality Vote | Code | 0
Shifting Capsule Networks from the Cloud to the Deep Edge | Code | 0
Attention Augmented Convolutional Transformer for Tabular Time-series | — | 0
FedDQ: Communication-Efficient Federated Learning with Descending Quantization | — | 0
Pre-Quantized Deep Learning Models Codified in ONNX to Enable Hardware/Software Co-Design | — | 0
SDR: Efficient Neural Re-ranking using Succinct Document Representation | — | 0
Beyond Neighbourhood-Preserving Transformations for Quantization-Based Unsupervised Hashing | — | 0
Towards Efficient Post-training Quantization of Pre-trained Language Models | — | 0
Lidar Range Image Compression with Deep Delta Encoding | — | 0
Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning | — | 0
Beyond Quantization: Power aware neural networks | — | 0
Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization | — | 0
Succinct Compression: Near-Optimal and Lossless Compression of Deep Neural Networks during Inference Runtime | — | 0
Contrastive Mutual Information Maximization for Binary Neural Networks | — | 0
Contrastive Quant: Quantization Makes Stronger Contrastive Learning | — | 0
PIVQGAN: Posture and Identity Disentangled Image-to-Image Translation via Vector Quantization | — | 0
CSQ: Centered Symmetric Quantization for Extremely Low Bit Neural Networks | — | 0
Specialized Transformers: Faster, Smaller and more Accurate NLP Models | — | 0
Post-Training Quantization Is All You Need to Perform Cross-Platform Learned Image Compression | — | 0
Lattice Quantization | — | 0
Delving into Channels: Exploring Hyperparameter Space of Channel Bit Widths with Linear Complexity | — | 0
Differentiable Discrete Device-to-System Codesign for Optical Neural Networks via Gumbel-Softmax | — | 0
Riemannian Manifold Embeddings for Straight-Through Estimator | — | 0
Revisiting Locality-Sensitive Binary Codes from Random Fourier Features | — | 0
Efficient Point Transformer for Large-scale 3D Scene Understanding | — | 0
HoloFormer: Deep Compression of Pre-Trained Transforms via Unified Optimization of N:M Sparsity and Integer Quantization | — | 0
Faster Neural Net Inference via Forests of Sparse Oblique Decision Trees | — | 0
Wavelet Feature Maps Compression for Low Bandwidth Convolutional Neural Networks | — | 0
Quantized sparse PCA for neural network weight compression | — | 0
Full-Precision Free Binary Graph Neural Networks | — | 0
Click-through Rate Prediction with Auto-Quantized Contrastive Learning | — | 0
Performance Analysis of IRS-Assisted Cell-Free Communication | — | 0
Communication-Efficient Federated Linear and Deep Generalized Canonical Correlation Analysis | Code | 0
Distribution-sensitive Information Retention for Accurate Binary Neural Network | — | 0
Predicting Attention Sparsity in Transformers | — | 0
Page 68 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified