SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
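
As a minimal sketch of the idea, the Python/NumPy snippet below (illustrative only, not code from the cited paper) performs symmetric per-tensor quantization: a float32 tensor is mapped to int8 with a single scale factor and can be dequantized back to an approximation of the original values.

import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map a float32 tensor to int8 plus one scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # avoid division by zero for an all-zero tensor
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", float(np.max(np.abs(w - dequantize(q, s)))))

The int8 payload is what makes storage and arithmetic cheaper; the scale stays in floating point so the original range can be approximately recovered.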

Papers

Showing 1751–1800 of 4925 papers

Title | Hype
Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data | 0
Compressed Particle-Based Federated Bayesian Learning and Unlearning | 0
ARQ: A Mixed-Precision Quantization Framework for Accurate and Certifiably Robust DNNs | 0
Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition | 0
Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural Networks | 0
Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming Content | 0
Fighting Quantization Bias With Bias | 0
Fighting over-fitting with quantization for learning deep neural networks on noisy labels | 0
FGMP: Fine-Grained Mixed-Precision Weight and Activation Quantization for Hardware-Accelerated LLM Inference | 0
A Robust Visual Sampling Model Inspired by Receptive Field | 0
A Robust Deep Learning-Based Beamforming Design for RIS-assisted Multiuser MISO Communications with Practical Constraints | 0
AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs | 0
3D representation in 512-Byte: Variational tokenizer is the key for autoregressive 3D generation | 0
1-bit Localization Scheme for Radar using Dithered Quantized Compressed Sensing | 0
DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | 0
Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models | 0
A Robust and Low Complexity Deep Learning Model for Remote Sensing Image Classification | 0
Few-bit Quantization of Neural Networks for Nonlinearity Mitigation in a Fiber Transmission Experiment | 0
FETCH: A Memory-Efficient Replay Approach for Continual Learning in Image Classification | 0
ARM 4-BIT PQ: SIMD-based Acceleration for Approximate Nearest Neighbor Search on ARM | 0
ADMM Based Semi-Structured Pattern Pruning Framework For Transformer | 0
FedShift: Tackling Dual Heterogeneity Problem of Federated Learning via Weight Shift Aggregation | 0
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization | 0
A Rigorous Analysis of Least Squares Sine Fitting Using Quantized Data: the Random Phase Case | 0
FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization | 0
FedHQ: Hybrid Runtime Quantization for Federated Learning | 0
Federated TD Learning over Finite-Rate Erasure Channels: Linear Speedup under Markovian Sampling | 0
A Directed-Evolution Method for Sparsification and Compression of Neural Networks with Application to Object Identification and Segmentation and considerations of optimal quantization using small number of bits | 0
AccLLM: Accelerating Long-Context LLM Inference Via Algorithm-Hardware Co-Design | 0
Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks | 0
FedX: Adaptive Model Decomposition and Quantization for IoT Federated Learning | 0
Federated Split BERT for Heterogeneous Text Classification | 0
Composite Correlation Quantization for Efficient Multimodal Retrieval | 0
HAFLQ: Heterogeneous Adaptive Federated LoRA Fine-tuned LLM with Quantization | 0
Composite Code Sparse Autoencoders for first stage retrieval | 0
FewGAN: Generating from the Joint Distribution of a Few Images | 0
Are Words the Quanta of Human Language? Extending the Domain of Quantum Cognition | 0
Federated Learning With Quantized Global Model Updates | 0
FFN Fusion: Rethinking Sequential Computation in Large Language Models | 0
Federated Learning with Lossy Distributed Source Coding: Analysis and Optimization | 0
Component Training of Turbo Autoencoders | 0
Federated Learning: Strategies for Improving Communication Efficiency | 0
CompMarkGS: Robust Watermarking for Compressed 3D Gaussian Splatting | 0
A Diffusion Model Based Quality Enhancement Method for HEVC Compressed Video | 0
Federated Learning in Adversarial Settings | 0
Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission | 0
Federated Aggregation of Mallows Rankings: A Comparative Analysis of Borda and Lehmer Coding | 0
Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners | 0
A Review of Recent Advances of Binary Neural Networks for Edge Computing | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified