SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
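
To make the idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. It illustrates the general technique only, not the scheme from the cited paper; the function names are ours.

import numpy as np

def quantize_int8(x: np.ndarray):
    # One scale for the whole tensor; the largest magnitude maps to +/-127.
    scale = max(np.abs(x).max() / 127.0, 1e-12)  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 values from the int8 codes.
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
print(np.abs(dequantize_int8(q, scale) - x).max())  # rounding error, at most ~scale/2

The int8 codes use a quarter of the memory of float32 values, and integer arithmetic on them is cheaper on most hardware; the price is a rounding error bounded by about half the scale.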

Papers

Showing 2451-2500 of 4925 papers

Title | Status | Hype
Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the Large-Composition Regime |  | 0
CALM: Co-evolution of Algorithms and Language Model for Automatic Heuristic Design |  | 0
CAMBI: Contrast-aware Multiscale Banding Index |  | 0
Cancer Subtyping via Embedded Unsupervised Learning on Transcriptomics Data |  | 0
Can General-Purpose Large Language Models Generalize to English-Thai Machine Translation ? |  | 0
Can Large Language Models Understand Context? |  | 0
Causal Speech Enhancement with Predicting Semantics based on Quantized Self-supervised Learning Features |  | 0
CBQ: Cross-Block Quantization for Large Language Models |  | 0
CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning |  | 0
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization |  | 0
CEG4N: Counter-Example Guided Neural Network Quantization Refinement |  | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs |  | 0
Cell growth rate dictates the onset of glass to fluid-like transition and long time super-diffusion in an evolving cell colony |  | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN |  | 0
Challenging GPU Dominance: When CPUs Outperform for On-Device LLM Inference |  | 0
Channel-Aware Constellation Design for Digital OTA Computation |  | 0
Channel Balancing for Accurate Quantization of Winograd Convolutions |  | 0
Channel Estimation for MIMO Hybrid Architectures with Low Resolution ADCs for mmWave Communication |  | 0
Channel Estimation in MIMO Systems with One-bit Spatial Sigma-delta ADCs |  | 0
Channel Pruning In Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method |  | 0
Channel-wise Hessian Aware trace-Weighted Quantization of Neural Networks |  | 0
Channel-Wise Mixed-Precision Quantization for Large Language Models |  | 0
Characterising Bias in Compressed Models |  | 0
Characterization of the frequency response of channel-interleaved photonic ADCs based on the optical time-division demultiplexer |  | 0
Characterizing Coherent Integrated Photonic Neural Networks under Imperfections |  | 0
Characterizing the Accuracy -- Efficiency Trade-off of Low-rank Decomposition in Language Models |  | 0
Check-N-Run: A Checkpointing System for Training Deep Learning Recommendation Models |  | 0
Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge |  | 0
Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models |  | 0
CHIME: A Compressive Framework for Holistic Interest Modeling |  | 0
Choose Your Model Size: Any Compression by a Single Gradient Descent |  | 0
CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech |  | 0
CLAP-ART: Automated Audio Captioning with Semantic-rich Audio Representation Tokenizer |  | 0
Class-based Quantization for Neural Networks |  | 0
Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses |  | 0
Click-through Rate Prediction with Auto-Quantized Contrastive Learning |  | 0
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization |  | 0
ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning |  | 0
Cluster-Based Cooperative Digital Over-the-Air Aggregation for Wireless Federated Edge Learning |  | 0
Clustering-Based Evolutionary Federated Multiobjective Optimization and Learning |  | 0
Clustering with Bregman Divergences: an Asymptotic Analysis |  | 0
Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss |  | 0
Cluster Pruning: An Efficient Filter Pruning Method for Edge AI Vision Applications |  | 0
Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization |  | 0
Cluster Regularized Quantization for Deep Networks Compression |  | 0
CNN2Gate: Toward Designing a General Framework for Implementation of Convolutional Neural Networks on FPGA |  | 0
CNN Acceleration by Low-rank Approximation with Quantized Factors |  | 0
CNN-based Analog CSI Feedback in FDD MIMO-OFDM Systems |  | 0
CNN-Based Equalization for Communications: Achieving Gigabit Throughput with a Flexible FPGA Hardware Architecture |  | 0
CNN inference acceleration using dictionary of centroids |  | 0
Page 50 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 |  | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 |  | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 |  | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 |  | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 |  | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 |  | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 |  | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 |  | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 |  | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 |  | Unverified
2 | DTQ | MAP | 0.79 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 |  | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 98.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 92.92 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 95.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 96.38 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 99.8 |  | Unverified