SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
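The definition above can be made concrete with a minimal sketch of asymmetric uniform quantization (the scale/zero-point scheme commonly used for int8 storage). This is an illustrative example, not the method of the cited paper; the function names are invented for this sketch.

```python
import numpy as np

def quantize_int8(x):
    """Asymmetric uniform quantization: float32 array -> (int8 codes, scale, zero_point)."""
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    # Avoid a zero scale when the tensor is constant.
    scale = max((x_max - x_min) / (qmax - qmin), 1e-8)
    # Choose the zero point so that x_min maps (approximately) to qmin.
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)).astype(np.float32)
q, scale, zero_point = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zero_point)
# Per-element reconstruction error is at most half a quantization step (scale / 2).
max_err = float(np.max(np.abs(x - x_hat)))
```

The int8 codes cost 4x less memory than float32, at the price of a bounded rounding error proportional to the tensor's dynamic range.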

Papers

Showing 3851–3900 of 4925 papers

Title | Hype
Brain Inspired Cortical Coding Method for Fast Clustering and Codebook Generation | 0
Brain-inspired reverse adversarial examples | 0
BrainStratify: Coarse-to-Fine Disentanglement of Intracranial Neural Dynamics | 0
Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model | 0
Breaking the Bias: Recalibrating the Attention of Industrial Anomaly Detection | 0
Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs | 0
Breaking the waves: asymmetric random periodic features for low-bitrate kernel machines | 0
Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation | 0
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN) | 0
Bridging the Gap between Continuous and Informative Discrete Representations by Random Product Quantization | 0
Bridging the Gap between Gaussian Diffusion Models and Universal Quantization for Image Compression | 0
Bridging the Modality Gap: Softly Discretizing Audio Representation for LLM-based Automatic Speech Recognition | 0
BRIEDGE: EEG-Adaptive Edge AI for Multi-Brain to Multi-Robot Interaction | 0
BRICS: Bi-level feature Representation of Image CollectionS | 0
BTEL: A Binary Tree Encoding Approach for Visual Localization | 0
Building an Efficiency Pipeline: Commutativity and Cumulativeness of Efficiency Operators for Transformers | 0
Bullion: A Column Store for Machine Learning | 0
Byzantine-Resilient Secure Federated Learning | 0
CA3D: Convolutional-Attentional 3D Nets for Efficient Video Activity Recognition on the Edge | 0
CacheQuant: Comprehensively Accelerated Diffusion Models | 0
Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the Large-Composition Regime | 0
CALM: Co-evolution of Algorithms and Language Model for Automatic Heuristic Design | 0
CAMBI: Contrast-aware Multiscale Banding Index | 0
Cancer Subtyping via Embedded Unsupervised Learning on Transcriptomics Data | 0
Can General-Purpose Large Language Models Generalize to English-Thai Machine Translation ? | 0
Can Large Language Models Understand Context? | 0
Causal Speech Enhancement with Predicting Semantics based on Quantized Self-supervised Learning Features | 0
CBQ: Cross-Block Quantization for Large Language Models | 0
CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning | 0
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization | 0
CEG4N: Counter-Example Guided Neural Network Quantization Refinement | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | 0
Cell growth rate dictates the onset of glass to fluid-like transition and long time super-diffusion in an evolving cell colony | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN | 0
Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference | 0
Channel-Aware Constellation Design for Digital OTA Computation | 0
Channel Balancing for Accurate Quantization of Winograd Convolutions | 0
Channel Estimation for MIMO Hybrid Architectures with Low Resolution ADCs for mmWave Communication | 0
Channel Estimation in MIMO Systems with One-bit Spatial Sigma-delta ADCs | 0
Channel Pruning In Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method | 0
Channel-wise Hessian Aware trace-Weighted Quantization of Neural Networks | 0
Channel-Wise Mixed-Precision Quantization for Large Language Models | 0
Characterising Bias in Compressed Models | 0
Characterization of the frequency response of channel-interleaved photonic ADCs based on the optical time-division demultiplexer | 0
Characterizing Coherent Integrated Photonic Neural Networks under Imperfections | 0
Characterizing the Accuracy -- Efficiency Trade-off of Low-rank Decomposition in Language Models | 0
Check-N-Run: A Checkpointing System for Training Deep Learning Recommendation Models | 0
Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge | 0
Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models | 0
CHIME: A Compressive Framework for Holistic Interest Modeling | 0

Page 78 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified