SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
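To make the idea concrete, below is a minimal NumPy sketch of symmetric uniform quantization of a float32 tensor to int8 and back. It is an illustration of the general technique only, not the method of the cited paper or of any paper listed on this page; the helper names `quantize_int8` and `dequantize` are our own.

```python
import numpy as np

def quantize_int8(x):
    # Illustrative helper, not from the cited paper.
    # Symmetric scale: the largest magnitude maps to the int8 extreme 127.
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float32 tensor from the int8 codes.
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Per-element rounding error is at most scale / 2.
print("max abs error:", np.abs(x - x_hat).max())
```

Per its title, the source paper applies this kind of fixed-point arithmetic to back-propagation and adapts the precision during training; practical schemes also add refinements such as per-channel scales.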

Papers

Showing 3551–3600 of 4925 papers

Title | Status | Hype
Computational data analysis for first quantization estimation on JPEG double compressed images | Code | 0
Quantization optimized with respect to the Haar basis | | 0
Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning | Code | 0
Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks | | 0
I-BERT: Integer-only BERT Quantization | Code | 2
Improving Low-Precision Network Quantization via Bin Regularization | | 0
Uniformity in Heterogeneity: Diving Deep Into Count Interval Partition for Crowd Counting | Code | 1
RangeDet: In Defense of Range View for LiDAR-Based 3D Object Detection | Code | 1
Product Quantizer Aware Inverted Index for Scalable Nearest Neighbor Search | | 0
Improving Neural Network Efficiency via Post-Training Quantization With Adaptive Floating-Point | Code | 1
Practical Locally Private Federated Learning with Communication Efficiency | | 0
Explore the Potential of CNN Low Bit Training | | 0
Incremental few-shot learning via vector quantization in deep embedded space | | 0
Post-Training Weighted Quantization of Neural Networks for Language Models | | 0
WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic | | 0
Multi-Prize Lottery Ticket Hypothesis: Finding Generalizable and Efficient Binary Subnetworks in a Randomly Weighted Neural Network | | 0
Uniform-Precision Neural Network Quantization via Neural Channel Expansion | | 0
TwinDNN: A Tale of Two Deep Neural Networks | | 0
Weights Having Stable Signs Are Important: Finding Primary Subnetworks and Kernels to Compress Binary Weight Networks | | 0
Improving the accuracy of neural networks in analog computing-in-memory systems by a generalized quantization method | | 0
End-to-end Quantized Training via Log-Barrier Extensions | | 0
WaveQ: Gradient-Based Deep Quantization of Neural Networks Through Sinusoidal Regularization | Code | 0
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization | | 0
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization | | 0
Hybrid and Non-Uniform DNN quantization methods using Retro Synthesis data for efficient inference | | 0
DQSGD: Dynamic Quantized Stochastic Gradient Descent for Communication-Efficient Distributed Learning | | 0
Collaborative Filtering with Smooth Reconstruction of the Preference Function | | 0
Learned Multi-Resolution Variable-Rate Image Compression with Octave-based Residual Blocks | | 0
BinaryBERT: Pushing the Limit of BERT Quantization | | 0
A Memory Efficient Baseline for Open Domain Question Answering | Code | 1
Improving Adversarial Robustness in Weight-quantized Neural Networks | | 0
Hybrid and Non-Uniform quantization methods using retro synthesis data for efficient inference | | 0
Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks | | 0
Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing | Code | 1
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training | Code | 1
EQ-Net: A Unified Deep Learning Framework for Log-Likelihood Ratio Estimation and Quantization | | 0
Energy Efficient Federated Learning over Heterogeneous Mobile Devices via Joint Design of Weight Quantization and Wireless Transmission | | 0
DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks | Code | 1
Study of Energy-Efficient Distributed RLS-based Learning with Coarsely Quantized Signals | | 0
One-Bit Target Detection in Collocated MIMO Radar and Performance Degradation Analysis | | 0
Resource-efficient DNNs for Keyword Spotting using Neural Architecture Search and Quantization | Code | 0
FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4bit-Compact Multilayer Perceptrons | | 0
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning | | 0
Efficient CNN-LSTM based Image Captioning using Neural Network Compression | Code | 0
CosSGD: Communication-Efficient Federated Learning with a Simple Cosine-Based Quantization | | 0
Exploring Neural Networks Quantization via Layer-Wise Quantization Analysis | | 0
Scalable Verification of Quantized Neural Networks (Technical Report) | Code | 0
Robust Downlink Transmit Optimization under Quantized Channel Feedback via the Strong Duality for QCQP | | 0
Quantizing data for distributed learning | | 0
Predicting Generalization in Deep Learning via Local Measures of Distortion | | 0
Page 72 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified