
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
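
To make the idea concrete, here is a minimal sketch of symmetric uniform int8 quantization in NumPy. It illustrates quantization in general, not the adaptive-precision training method of the cited paper; the helper names quantize_int8 and dequantize are invented for this example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric uniform quantization of a float32 array to int8.

    Illustrative sketch: maps the observed range [-max|x|, max|x|]
    onto the int8 range [-127, 127]. Assumes x is not all zeros.
    """
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 array from int8 values."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and measure the round-trip error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Training-time schemes apply a transform like this per tensor (and typically per step) to weights, activations, or gradients, so the expensive matrix multiplies can run in cheap integer arithmetic.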

Papers

Showing 501–525 of 4925 papers

Title | Status | Hype
NAPA-VQ: Neighborhood-Aware Prototype Augmentation with Vector Quantization for Continual Learning | Code | 1
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing | Code | 1
Network Quantization with Element-wise Gradient Scaling | Code | 1
4-bit Shampoo for Memory-Efficient Network Training | Code | 1
Exploring Quantization for Efficient Pre-Training of Transformer Language Models | Code | 1
A Thorough Examination of Decoding Methods in the Era of LLMs | Code | 1
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
AdANNS: A Framework for Adaptive Semantic Search | Code | 1
Neural Vector Fields: Implicit Representation by Explicit Learning | Code | 1
AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer | Code | 1
Compact representations of convolutional neural networks via weight pruning and quantization | Code | 1
Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging | Code | 1
Hierarchical Quantized Autoencoders | Code | 1
CommVQ: Commutative Vector Quantization for KV Cache Compression | Code | 1
Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation | Code | 1
Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters | Code | 1
Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching | Code | 1
Communication-Efficient Adaptive Federated Learning | Code | 1
Gradient-based Automatic Mixed Precision Quantization for Neural Networks On-Chip | Code | 1
And the Bit Goes Down: Revisiting the Quantization of Neural Networks | Code | 1
Anchor-based Plain Net for Mobile Image Super-Resolution | Code | 1
ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training | Code | 1
An Automatic Graph Construction Framework based on Large Language Models for Recommendation | Code | 1
Active Image Indexing | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified