SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
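For concreteness, here is a minimal sketch of that float-to-fixed-point mapping (symmetric per-tensor int8 quantization; the helper names quantize_int8 and dequantize are illustrative and not taken from the cited paper):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor onto int8 codes with a single shared scale."""
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)  # guard all-zero input
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(x - x_hat)))  # rounding error is at most scale / 2
```

The same pattern extends to the variants that dominate the paper list below: per-channel scales (one scale per output channel), asymmetric schemes (an added integer zero point), and int16 in place of int8 (replace 127/-128 with 32767/-32768).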

Papers

Showing 3901–3950 of 4925 papers

Title | Status | Hype
Choose Your Model Size: Any Compression by a Single Gradient Descent | | 0
CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech | | 0
CLAP-ART: Automated Audio Captioning with Semantic-rich Audio Representation Tokenizer | | 0
Class-based Quantization for Neural Networks | | 0
Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses | | 0
Click-through Rate Prediction with Auto-Quantized Contrastive Learning | | 0
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization | | 0
ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning | | 0
Cluster-Based Cooperative Digital Over-the-Air Aggregation for Wireless Federated Edge Learning | | 0
Clustering-Based Evolutionary Federated Multiobjective Optimization and Learning | | 0
Clustering with Bregman Divergences: an Asymptotic Analysis | | 0
Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss | | 0
Cluster Pruning: An Efficient Filter Pruning Method for Edge AI Vision Applications | | 0
Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization | | 0
Cluster Regularized Quantization for Deep Networks Compression | | 0
CNN2Gate: Toward Designing a General Framework for Implementation of Convolutional Neural Networks on FPGA | | 0
CNN Acceleration by Low-rank Approximation with Quantized Factors | | 0
CNN-based Analog CSI Feedback in FDD MIMO-OFDM Systems | | 0
CNN-Based Equalization for Communications: Achieving Gigabit Throughput with a Flexible FPGA Hardware Architecture | | 0
CNN inference acceleration using dictionary of centroids | | 0
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection | | 0
CoAst: Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation | | 0
Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference | | 0
Codage échelonnable à granularité fine de la parole : Application au codeur G.729 (Fine granularity scalable speech coding: Application to the G.729 coder) [in French] | | 0
Codebook based Audio Feature Representation for Music Information Retrieval | | 0
CodeBrain: Impute Any Brain MRI via Instance-specific Scalar-quantized Codes | | 0
Codec-ASR: Training Performant Automatic Speech Recognition Systems with Discrete Speech Representations | | 0
Co-Designing Binarized Transformer and Hardware Accelerator for Efficient End-to-End Edge Deployment | | 0
Coding for Random Projections | | 0
Coding for Random Projections and Approximate Near Neighbor Search | | 0
CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation | | 0
Cognitive Coding of Speech | | 0
Cognitive Non-Coherent Jamming Techniques for Frequency Selective Attacks | | 0
Collaborative Automotive Radar Sensing via Mixed-Precision Distributed Array Completion | | 0
Collaborative Edge AI Inference over Cloud-RAN | | 0
Collaborative Filtering with Smooth Reconstruction of the Preference Function | | 0
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks | | 0
Collaborative Quantization Embeddings for Intra-Subject Prostate MR Image Registration | | 0
Collaborative Quantization for Cross-Modal Similarity Search | | 0
Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks | | 0
Post Training Quantization of Large Language Models with Microscaling Formats | | 0
COMET: Towards Practical W4A4KV4 LLMs Serving | | 0
Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique | | 0
Communication Compression for Tensor Parallel LLM Inference | | 0
Communication-Efficient Decentralized Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control | | 0
Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM | | 0
Communication-Efficient Federated Distillation | | 0
Communication Efficient Federated Learning over Multiple Access Channels | | 0
Communication-Efficient Federated Learning via Optimal Client Sampling | | 0
Communication-Efficient Federated Learning via Quantized Compressed Sensing | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified