SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers

Papers

Showing 4101–4150 of 4925 papers

Title | Status | Hype
On Neural Architecture Search for Resource-Constrained Hardware Platforms | – | 0
SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization | – | 0
Exploiting Intelligent Reflecting Surfaces in NOMA Networks: Joint Beamforming Optimization | – | 0
Channel Estimation for MIMO Hybrid Architectures with Low Resolution ADCs for mmWave Communication | – | 0
Training DNN IoT Applications for Deployment On Analog NVM Crossbars | – | 0
Integrating PHY Security Into NDN-IoT Networks By Exploiting MEC: Authentication Efficiency, Robustness, and Accuracy Enhancement | – | 0
Noiseless Privacy | – | 0
Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning | – | 0
Secure Evaluation of Quantized Neural Networks | – | 0
Asynchronous Decentralized SGD with Quantized and Local Updates | – | 0
A holistic approach to polyphonic music transcription with neural networks | Code | 1
CNN-based Analog CSI Feedback in FDD MIMO-OFDM Systems | – | 0
Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning | – | 0
A Binary Variational Autoencoder for Hashing | Code | 0
Image processing in DNA | – | 0
Mirror Descent View for Neural Network Quantization | Code | 0
Fully Quantized Transformer for Machine Translation | – | 0
Reinforced Bit Allocation under Task-Driven Semantic Distortion Metrics | – | 0
Parametric context adaptive Laplace distribution for multimedia compression | – | 0
Variation-aware Binarized Memristive Networks | – | 0
Q8BERT: Quantized 8Bit BERT | Code | 1
Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach | Code | 0
OverQ: Opportunistic Outlier Quantization for Neural Network Accelerators | – | 0
QPyTorch: A Low-Precision Arithmetic Simulation Framework | Code | 0
High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning | – | 0
Bit Efficient Quantization for Deep Neural Networks | – | 0
Improvements to Target-Based 3D LiDAR to Camera Calibration | Code | 1
REMIND Your Neural Network to Prevent Catastrophic Forgetting | Code | 0
QuaRL: Quantization for Fast and Environmentally Sustainable Reinforcement Learning | Code | 0
Hierarchical Encoding of Sequential Data With Compact and Sub-Linear Storage Cost | Code | 0
DSConv: Efficient Convolution Operator | – | 0
NGEMM: Optimizing GEMM for Deep Learning via Compiler-based Techniques | – | 0
Optimal Controller and Quantizer Selection for Partially Observable Linear-Quadratic-Gaussian Systems | – | 0
Automated design of error-resilient and hardware-efficient deep neural networks | – | 0
XNOR-Net++: Improved Binary Neural Networks | – | 0
REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs | – | 0
AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference | – | 0
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization | – | 0
Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks | Code | 0
Optimized Quantization in Distributed Graph Signal Filtering | – | 0
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks | – | 0
GA-GAN: CT reconstruction from Biplanar DRRs using GAN with Guided Attention | – | 0
Optimizing Speech Recognition For The Edge | – | 0
Adaptive Binary-Ternary Quantization | – | 0
Prune or quantize? Strategy for Pareto-optimally low-cost and accurate CNN | – | 0
Goten: GPU-Outsourcing Trusted Execution of Neural Network Training and Prediction | Code | 0
Monte Carlo Deep Neural Network Arithmetic | – | 0
On the Pareto Efficiency of Quantized CNN | – | 0
Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression | – | 0
Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights | – | 0
Page 83 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | – | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | – | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | – | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | – | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | – | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | – | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | – | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | – | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | – | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | – | Unverified
2 | DTQ | MAP | 0.79 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | – | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 98.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 92.92 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 95.13 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | TAR @ FAR=1e-4 | 96.38 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | – | Accuracy | 99.8 | – | Unverified