SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
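To make the idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. This is a generic illustration of the float-to-fixed-point mapping described above, not the adaptive-precision scheme from the cited paper; the function names and the per-tensor scaling choice are this example's own assumptions.

```python
def quantize_int8(xs):
    """Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127].

    The scale is chosen so the largest-magnitude value lands on +/-127.
    """
    scale = max(abs(v) for v in xs) / 127.0
    qs = [max(-127, min(127, round(v / scale))) for v in xs]
    return qs, scale


def dequantize(qs, scale):
    """Map int8 codes back to approximate float values."""
    return [q * scale for q in qs]


x = [0.1, -0.5, 0.25, 1.0]
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# q holds small integers; x_hat approximates x to within half a quantization step.
```

Storing `q` (one byte per value) plus a single `scale` is what yields the memory and compute savings; the price is the small rounding error visible in `x_hat`.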

Papers

Showing 4401–4450 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interest Point Detection based on Adaptive Ternary Coding | | 0 |
| Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm | | 0 |
| Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks | | 0 |
| End-to-End Latent Fingerprint Search | | 0 |
| Precision Highway for Ultra Low-Precision Quantization | | 0 |
| Artificial neural networks condensation: A strategy to facilitate adaption of machine learning in medical settings by reducing computational burden | | 0 |
| Quicker ADC: Unlocking the hidden potential of Product Quantization with SIMD | Code | 0 |
| SQuantizer: Simultaneous Learning for Both Sparse and Low-precision Neural Networks | | 0 |
| Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II) | Code | 0 |
| Efficient Super Resolution Using Binarized Neural Network | | 0 |
| Auto-tuning Neural Network Quantization Framework for Collaborative Inference Between the Cloud and Edge | | 0 |
| Deep neural networks algorithms for stochastic control problems on finite horizon: numerical applications | | 0 |
| Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces | Code | 0 |
| E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs | | 0 |
| Deep neural networks algorithms for stochastic control problems on finite horizon: convergence analysis | | 0 |
| DNQ: Dynamic Network Quantization | | 0 |
| Prototype-based Neural Network Layers: Incorporating Vector Quantization | | 0 |
| MDU-Net: Multi-scale Densely Connected U-Net for biomedical image segmentation | | 0 |
| HitNet: Hybrid Ternary Recurrent Neural Network | | 0 |
| A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication | | 0 |
| Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search | | 0 |
| Deep Signal Recovery with One-Bit Quantization | | 0 |
| Quantity over Quality: Dithered Quantization for Compressive Radar Systems | | 0 |
| Distributed dual vigilance fuzzy adaptive resonance theory learns online, retrieves arbitrarily-shaped clusters, and mitigates order dependence | Code | 0 |
| On Periodic Functions as Regularizers for Quantization of Neural Networks | | 0 |
| Joint Neural Architecture Search and Quantization | | 0 |
| Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation | | 0 |
| QUENN: QUantization Engine for low-power Neural Networks | | 0 |
| Iteratively Training Look-Up Tables for Network Quantization | | 0 |
| Gaussian AutoEncoder | | 0 |
| GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training | | 0 |
| Fast High-Dimensional Bilateral and Nonlocal Means Filtering | Code | 0 |
| ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks | | 0 |
| Deep Multiple Description Coding by Learning Scalar Quantization | | 0 |
| A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM | | 0 |
| QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks | Code | 0 |
| One-Bit OFDM Receivers via Deep Learning | | 0 |
| Rethinking floating point for deep learning | Code | 0 |
| Online Embedding Compression for Text Classification using Low Rank Matrix Factorization | | 0 |
| Towards Highly Accurate and Stable Face Alignment for High-Resolution Videos | Code | 0 |
| Convolutional Neural Network Quantization using Generalized Gamma Distribution | | 0 |
| Non-linear Canonical Correlation Analysis: A Compressed Representation Approach | | 0 |
| Low-Precision Random Fourier Features for Memory-Constrained Kernel Approximation | Code | 0 |
| DeepTwist: Learning Model Compression via Occasional Weight Distortion | | 0 |
| Low-complexity Recurrent Neural Network-based Polar Decoder with Weight Quantization Mechanism | | 0 |
| A Novel Approach to Quantized Matrix Completion Using Huber Loss Measure | | 0 |
| Geometry and clustering with metrics derived from separable Bregman divergences | | 0 |
| From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference | | 0 |
| To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference | | 0 |
| Differentiable Fine-grained Quantization for Deep Neural Network Compression | Code | 0 |
Page 89 of 99

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |