SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
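As a rough illustration of the idea, here is a minimal NumPy sketch (not code from the paper above or from any paper listed below) of symmetric, per-tensor int8 quantization, where a single scale factor maps float32 values to 8-bit integers and back:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of a float32 array to int8.

    Returns the int8 values and the scale needed to dequantize.
    """
    # Map the largest absolute value onto the int8 range [-127, 127].
    scale = np.max(np.abs(x)) / 127.0 if np.any(x) else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Example: weights stored in 8 bits instead of 32, at the cost of rounding error.
w = np.random.randn(4, 4).astype(np.float32)
w_q, s = quantize_int8(w)
w_hat = dequantize_int8(w_q, s)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Dequantization recovers only an approximation of the original values; much of the work listed below is concerned with keeping that rounding error from degrading training or inference quality.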

Papers

Showing 801–850 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| HHF: Hashing-guided Hinge Function for Deep Hashing Retrieval | Code | 1 |
| Clustering the Sketch: A Novel Approach to Embedding Table Compression | Code | 1 |
| Hierarchical Prior-based Super Resolution for Point Cloud Geometry Compression | Code | 1 |
| Hierarchical Quantized Autoencoders | Code | 1 |
| Hierarchical Vector Quantized Graph Autoencoder with Annealing-Based Code Selection | Code | 1 |
| Hierarchical Vector Quantization for Unsupervised Action Segmentation | Code | 1 |
| Algorithm-hardware Co-design for Deformable Convolution | Code | 1 |
| Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance | Code | 1 |
| Continual Learning via Bit-Level Information Preserving | Code | 1 |
| HiHPQ: Hierarchical Hyperbolic Product Quantization for Unsupervised Image Retrieval | Code | 1 |
| Mitigating Adversarial Perturbations for Deep Reinforcement Learning via Vector Quantization | Code | 1 |
| CNN-based first quantization estimation of double compressed JPEG images | Code | 1 |
| Mixed-precision Neural Networks on RISC-V Cores: ISA extensions for Multi-Pumped Soft SIMD Operations | Code | 1 |
| Semi-Discrete Normalizing Flows through Differentiable Tessellation | Code | 1 |
| Context-aware Communication for Multi-agent Reinforcement Learning | Code | 1 |
| Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1 |
| Mind the Gap: A Practical Attack on GGUF Quantization | Code | 1 |
| CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization | Code | 1 |
| MICSim: A Modular Simulator for Mixed-signal Compute-in-Memory based AI Accelerator | Code | 1 |
| Confounding Tradeoffs for Neural Network Quantization | Code | 1 |
| Codebook Features: Sparse and Discrete Interpretability for Neural Networks | Code | 1 |
| MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators | Code | 1 |
| Mini-GPTs: Efficient Large Language Models through Contextual Pruning | Code | 1 |
| COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization | Code | 1 |
| CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution | Code | 1 |
| Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1 |
| Improving Neural Network Efficiency via Post-Training Quantization With Adaptive Floating-Point | Code | 1 |
| SLiM: One-shot Quantization and Sparsity with Low-rank Approximation for LLM Weight Compression | Code | 1 |
| Improving Detail in Pluralistic Image Inpainting with Feature Dequantization | Code | 1 |
| Improvements to Target-Based 3D LiDAR to Camera Calibration | Code | 1 |
| Compression with Bayesian Implicit Neural Representations | Code | 1 |
| Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings | Code | 1 |
| Conditional Coding and Variable Bitrate for Practical Learned Video Coding | Code | 1 |
| Joint Privacy Enhancement and Quantization in Federated Learning | Code | 1 |
| MicroScopiQ: Accelerating Foundational Models through Outlier-Aware Microscaling Quantization | Code | 1 |
| Minimizing FLOPs to Learn Efficient Sparse Representations | Code | 1 |
| AQD: Towards Accurate Fully-Quantized Object Detection | Code | 1 |
| MOC-RVQ: Multilevel Codebook-Assisted Digital Generative Semantic Communication | Code | 1 |
| Mesa: A Memory-saving Training Framework for Transformers | Code | 1 |
| MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization | Code | 1 |
| Compressing LLMs: The Truth is Rarely Pure and Never Simple | Code | 1 |
| Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing | Code | 1 |
| MELTing point: Mobile Evaluation of Language Transformers | Code | 1 |
| Compress Any Segment Anything Model (SAM) | Code | 1 |
| Communication-Efficient Adaptive Federated Learning | Code | 1 |
| IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization | Code | 1 |
| MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization | Code | 1 |
| Compact representations of convolutional neural networks via weight pruning and quantization | Code | 1 |
| ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1 |
| A holistic approach to polyphonic music transcription with neural networks | Code | 1 |
Page 17 of 99

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | Accuracy | 99.8 | | Unverified |