SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
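
To make the float-to-fixed-point mapping above concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The function names and the single shared-scale scheme are illustrative assumptions for this page, not the method of the cited paper.

```python
# Minimal sketch of symmetric per-tensor int8 quantization
# (illustrative; not the scheme from the cited paper).
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 tensor to int8 codes plus one shared scale factor."""
    scale = max(np.abs(x).max() / 127.0, 1e-12)  # guard against all-zero input
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
print("max abs rounding error:", np.abs(w - w_hat).max())
```

Storing the int8 codes plus one scale per tensor cuts memory roughly 4x versus float32, and it is the integer arithmetic on those codes that makes low-bit kernels cheaper than their floating-point counterparts.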

Papers

Showing 2751–2800 of 4925 papers

Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection
Self-Adaptable Templates for Feature Coding
Self-calibration for Language Model Quantization and Pruning
Self-control: A Better Conditional Mechanism for Masked Autoregressive Model
Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models
Self-Supervised Consistent Quantization for Fully Unsupervised Image Retrieval
Self-triggered Consensus of Multi-agent Systems with Quantized Relative State Measurements
Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression
Semantic Certainty Assessment in Vector Retrieval Systems: A Novel Framework for Embedding Quality Evaluation
Semantic Residual for Multimodal Unified Discrete Representation
Semantic Retention and Extreme Compression in LLMs: Can We Have Both?
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers
Semantic Text Compression for Classification
Semi-Blind Post-Equalizer SINR Estimation and Dual CSI Feedback for Radar-Cellular Coexistence
SEMINAR: Search Enhanced Multi-modal Interest Network and Approximate Retrieval for Lifelong Sequential Recommendation
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization
Semi-supervised Vector-Quantization in Visual SLAM using HGCN
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware
Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation
SensorChat: Answering Qualitative and Quantitative Questions during Long-Term Multimodal Sensor Interactions
Sensor Selection and Distributed Quantization for Energy Efficiency in Massive MTC
SEP-Nets: Small and Effective Pattern Networks
SeRP: Self-Supervised Representation Learning Using Perturbed Point Clouds
Service Delay Minimization for Federated Learning over Mobile Devices
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures
Serving Large Language Models on Huawei CloudMatrix384
Set-Theoretic Learning for Detection in Cell-Less C-RAN Systems
SFC: Achieve Accurate Fast Convolution under Low-precision Arithmetic
SGC-VQGAN: Towards Complex Scene Representation via Semantic Guided Clustering Codebook
SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization
SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations
Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions
Shared Predictive Cross-Modal Deep Quantization
SHARK: A Lightweight Model Compression Approach for Large-scale Recommender Systems
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Shining light on data: Geometric data analysis through quantum dynamics
Shortlist Selection With Residual-Aware Distance Estimator for K-Nearest Neighbor Search
Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features
SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network
Does Acceleration Cause Hidden Instability in Vision Language Models? Uncovering Instance-Level Divergence Through a Large-Scale Empirical Study
MERCURY: Accelerating DNN Training By Exploiting Input Similarity
Simple and Effective Unsupervised Redundancy Elimination to Compress Dense Vectors for Passage Retrieval
Neural Speech Coding for Real-time Communications using Constant Bitrate Scalar Quantization
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization
Simple strategies for recovering inner products from coarsely quantized random projections
SimQ-NAS: Simultaneous Quantization Policy and Neural Architecture Search
Simulated Annealing for JPEG Quantization
Majority Kernels: An Approach to Leverage Big Model Dynamics for Efficient Small Model Training
Simultaneous Compression and Quantization: A Joint Approach for Efficient Unsupervised Hashing

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified