SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
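
To make the core idea concrete, below is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The helper names and the single-scale scheme are illustrative assumptions, not the method of the paper quoted above or of any paper listed below.

```python
# Minimal sketch: symmetric per-tensor int8 quantization (illustrative only).
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto int8 codes with a single symmetric scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # one scale per tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs round-trip error:", np.abs(x - x_hat).max())  # at most ~scale/2
```

Round-tripping through int8 perturbs each in-range element by at most half a quantization step; this bounded error is the accuracy cost traded for cheaper fixed-point arithmetic and storage.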

Papers

Showing 4201–4250 of 4925 papers

Title | Status | Hype
NUQSGD: Improved Communication Efficiency for Data-parallel SGD via Nonuniform Quantization | Code | 0
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks | Code | 0
Learn to Compress CSI and Allocate Resources in Vehicular Networks | | 0
Unsupervised Neural Quantization for Compressed-Domain Similarity Search | Code | 0
Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations | | 0
Primary quantization matrix estimation of double compressed JPEG images via CNN | Code | 0
Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge | | 0
GDRQ: Group-based Distribution Reshaping for Quantization | | 0
U-Net Fixed-Point Quantization for Medical Image Segmentation | Code | 0
Efficient computation of counterfactual explanations of LVQ models | Code | 0
Deep Task-Based Quantization | | 0
Central Similarity Quantization for Efficient Image and Video Retrieval | Code | 0
Learn to Allocate Resources in Vehicular Networks | | 0
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks | Code | 0
Robust and Communication-Efficient Collaborative Learning | Code | 0
QRMODA and BRMODA: Novel Models for Face Recognition Accuracy in Computer Vision Systems with Adapted Video Streams | | 0
Distributed Average Consensus under Quantized Communication via Event-Triggered Mass Splitting | | 0
Exploring Semantic Segmentation on the DCT Representation | | 0
Light Multi-segment Activation for Model Compression | Code | 0
An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis | | 0
Learning Multimodal Fixed-Point Weights using Gradient Descent | | 0
The Bach Doodle: Approachable music composition with machine learning at scale | | 0
And the Bit Goes Down: Revisiting the Quantization of Neural Networks | Code | 1
A Targeted Acceleration and Compression Framework for Low bit Neural Networks | | 0
Multi-Scale Vector Quantization with Reconstruction Trees | | 0
Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform? | | 0
Don't take it lightly: Phasing optical random projections with unknown operators | Code | 0
Deep Convolutional Compression for Massive MIMO CSI Feedback | | 0
Compression of Acoustic Event Detection Models With Quantized Distillation | | 0
Weight Normalization based Quantization for Deep Neural Network Compression | | 0
BTEL: A Binary Tree Encoding Approach for Visual Localization | | 0
Detection of small changes in medical and random-dot images comparing self-organizing map performance to human detection | | 0
Gridless Multisnapshot Variational Line Spectral Estimation from Coarsely Quantized Samples | | 0
Back to Simplicity: How to Train Accurate BNNs from Scratch? | | 0
Deep Learning-Based Quantization of L-Values for Gray-Coded Modulation | Code | 0
Quantized Three-Ion-Channel Neuron Model for Neural Action Potentials | | 0
Deep Recurrent Quantization for Generating Sequential Binary Codes | Code | 0
Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval | Code | 0
Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | | 0
Parameterized Structured Pruning for Deep Neural Networks | | 0
BasisConv: A method for compressed representation and learning in CNNs | | 0
Data-Free Quantization Through Weight Equalization and Bias Correction | Code | 1
Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference | | 0
Fighting Quantization Bias With Bias | | 0
Deep Spherical Quantization for Image Search | | 0
Word-based Domain Adaptation for Neural Machine Translation | | 0
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations | | 0
Exploiting Offset-guided Network for Pose Estimation and Tracking | | 0
Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence | Code | 0
Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified