SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
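For intuition, the sketch below shows minimal per-tensor symmetric int8 quantization in NumPy. It illustrates the general float-to-fixed-point mapping described above, not the adaptive-precision scheme from the cited paper; the function names are hypothetical.

# Illustrative sketch only: uniform symmetric int8 quantization with one
# per-tensor scale. Not the method from the cited paper.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor to int8 plus a single scale factor."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 tensor."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(x)
x_hat = dequantize_int8(q, s)
print("max abs error:", np.abs(x - x_hat).max())

Because the scale maps the largest-magnitude value to 127, every element stays in range and the round-trip error per element is at most about half the scale.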

Papers

Showing 1101–1125 of 4925 papers

Title | Status | Hype
Distortion-Controlled Dithering with Reduced Recompression Rate |  | 0
Distributed Computation of Exact Average Degree and Network Size in Finite Number of Steps under Quantized Communication |  | 0
Distributed Optimization with Finite Bit Adaptive Quantization for Efficient Communication and Precision Enhancement |  | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers |  | 0
Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models |  | 0
A Robust and Low Complexity Deep Learning Model for Remote Sensing Image Classification |  | 0
ARM 4-BIT PQ: SIMD-based Acceleration for Approximate Nearest Neighbor Search on ARM |  | 0
ADMM Based Semi-Structured Pattern Pruning Framework For Transformer |  | 0
Discrete-Valued Neural Networks Using Variational Inference |  | 0
A Rigorous Analysis of Least Squares Sine Fitting Using Quantized Data: the Random Phase Case |  | 0
Discriminative Cross-View Binary Representation Learning |  | 0
AccLLM: Accelerating Long-Context LLM Inference Via Algorithm-Hardware Co-Design |  | 0
A Directed-Evolution Method for Sparsification and Compression of Neural Networks with Application to Object Identification and Segmentation and considerations of optimal quantization using small number of bits |  | 0
Discrete-Valued Neural Communication |  | 0
Disentangled Representation Learning for Unsupervised Neural Quantization |  | 0
Disentangling segmental and prosodic factors to non-native speech comprehensibility |  | 0
Composite Correlation Quantization for Efficient Multimodal Retrieval |  | 0
Composite Code Sparse Autoencoders for first stage retrieval |  | 0
Are Words the Quanta of Human Language? Extending the Domain of Quantum Cognition |  | 0
Component Training of Turbo Autoencoders |  | 0
CompMarkGS: Robust Watermarking for Compressed 3D Gaussian Splatting |  | 0
A Diffusion Model Based Quality Enhancement Method for HEVC Compressed Video |  | 0
3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation |  | 0
Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission |  | 0
Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners |  | 0
Page 45 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 |  | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 |  | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 |  | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 |  | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 |  | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 |  | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 |  | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 |  | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 |  | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 |  | Unverified
2 | DTQ | MAP | 0.79 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 |  | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 98.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 92.92 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 95.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 96.38 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 99.8 |  | Unverified