SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
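As a concrete illustration of the float-to-fixed-point mapping described above, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The scheme and the function names (quantize_int8, dequantize) are illustrative assumptions for this page, not the method of the cited paper or of any paper listed below.

```python
# Minimal sketch: symmetric per-tensor int8 quantization (illustrative only).
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 tensor onto int8 codes with one shared scale factor."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Round-trip demo: the reconstruction error is bounded by half a quantization step.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.abs(x - x_hat).max(), "vs. step/2:", scale / 2)
```

Real systems typically refine this basic recipe with per-channel scales, asymmetric zero points, or learned clipping ranges; the papers below explore many such variants.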

Papers

Showing papers 3851–3900 of 4925 (page 78 of 99)

Title | Status | Hype
Power-Efficient Sampling | — | 0
Power Measurement Enabled Channel Autocorrelation Matrix Estimation for IRS-Assisted Wireless Communication | — | 0
Power-of-Two (PoT) Weights in Large Language Models (LLMs) | — | 0
Power-of-Two Quantization for Low Bitwidth and Hardware Compliant Neural Networks | — | 0
PowerQuant: Automorphism Search for Non-Uniform Quantization | — | 0
PQCache: Product Quantization-based KVCache for Long Context LLM Inference | — | 0
PQD: Post-training Quantization for Efficient Diffusion Models | — | 0
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation | — | 0
PQTable: Fast Exact Asymmetric Distance Neighbor Search for Product Quantization Using Hash Tables | — | 0
PQTable: Non-exhaustive Fast Search for Product-quantized Codes using Hash Tables | — | 0
Practical cognitive speech compression | — | 0
Practical Data-Dependent Metric Compression with Provable Guarantees | — | 0
Practical Locally Private Federated Learning with Communication Efficiency | — | 0
Practical Modulo Sampling: Mitigating High-Frequency Components | — | 0
PR-CIM: a Variation-Aware Binary-Neural-Network Framework for Process-Resilient Computation-in-memory | — | 0
Precipitation Nowcasting Using Physics Informed Discriminator Generative Models | — | 0
Precision and Recall Reject Curves for Classification | — | 0
Precision-aware Latency and Energy Balancing on Multi-Accelerator Platforms for DNN Inference | — | 0
Precision Enhancement of 3D Surfaces from Multiple Compressed Depth Maps | — | 0
Precision Highway for Ultra Low-Precision Quantization | — | 0
Precision Neural Network Quantization via Learnable Adaptive Modules | — | 0
Precision Where It Matters: A Novel Spike Aware Mixed-Precision Quantization Strategy for LLaMA-based Language Models | — | 0
Precoding Design for Limited-Feedback MISO Systems via Character-Polynomial Codes | — | 0
Predicting Attention Sparsity in Transformers | — | 0
Predicting Generalization in Deep Learning via Local Measures of Distortion | — | 0
Predicting Multi-Codebook Vector Quantization Indexes for Knowledge Distillation | — | 0
Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE | — | 0
Latent-Domain Predictive Neural Speech Coding | — | 0
Predictive Uncertainty through Quantization | — | 0
PredToken: Predicting Unknown Tokens and Beyond with Coarse-to-Fine Iterative Decoding | — | 0
Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization | — | 0
Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance | — | 0
Preprocessing Enhanced Image Compression for Machine Vision | — | 0
PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models | — | 0
Pre-Quantized Deep Learning Models Codified in ONNX to Enable Hardware/Software Co-Design | — | 0
Privacy-Preserving Orthogonal Aggregation for Guaranteeing Gender Fairness in Federated Recommendation | — | 0
Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare | — | 0
Privacy-Preserving Speech Representation Learning using Vector Quantization | — | 0
Privacy-Preserving UCB Decision Process Verification via zk-SNARKs | — | 0
Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption | — | 0
Prive-HD: Privacy-Preserved Hyperdimensional Computing | — | 0
PrivQuant: Communication-Efficient Private Inference with Quantized Network/Protocol Co-Optimization | — | 0
Probabilistically Sampled and Spectrally Clustered Plant Genotypes using Phenotypic Characteristics | — | 0
Probabilistic Learning Vector Quantization on Manifold of Symmetric Positive Definite Matrices | — | 0
Product Quantization Network for Fast Image Retrieval | — | 0
Product Quantizer Aware Inverted Index for Scalable Nearest Neighbor Search | — | 0
Product Split Trees | — | 0
ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes | — | 0
Progressive Compression with Universally Quantized Diffusion Models | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified