SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
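To make the idea concrete, here is a minimal sketch of symmetric uniform quantization to int8 in plain Python. This is an illustration of the general technique, not the method of the cited paper; the function names and the choice of a symmetric scale are assumptions for the example.

```python
def quantize_int8(values):
    """Map floats to int8 via a symmetric uniform scale (minimal sketch)."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127 if max_abs else 1.0  # one scale for the whole tensor
    # Round to the nearest integer step and clamp to the int8 range.
    return [max(-128, min(127, round(v / scale))) for v in values], scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [x * scale for x in q]
```

The round-trip error of each value is bounded by the quantization step `scale`, which is why low-bit formats trade a small, controlled accuracy loss for cheaper arithmetic and storage:

```python
vals = [0.5, -1.0, 0.25]
q, scale = quantize_int8(vals)
deq = dequantize(q, scale)   # each entry within one step of the original
```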

Papers

Showing 1401–1450 of 4925 papers

| Title | Status | Hype |
|---|---|---|
| Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection | | 0 |
| From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks | | 0 |
| LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit | Code | 4 |
| Ditto: Quantization-aware Secure Inference of Transformers upon MPC | Code | 3 |
| Custom Gradient Estimators are Straight-Through Estimators in Disguise | | 0 |
| QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving | Code | 4 |
| KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization | | 0 |
| Compression-based Privacy Preservation for Distributed Nash Equilibrium Seeking in Aggregative Games | | 0 |
| Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0 |
| DeltaKWS: A 65nm 36nJ/Decision Bio-inspired Temporal-Sparsity-Aware Digital Keyword Spotting IC with 0.6V Near-Threshold SRAM | | 0 |
| Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs | Code | 1 |
| Vector Quantization for Recommender Systems: A Review and Outlook | Code | 1 |
| PTQ4SAM: Post-Training Quantization for Segment Anything | Code | 2 |
| Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment | | 0 |
| Quantifying the Capabilities of LLMs across Scale and Precision | | 0 |
| Joint Discrete Precoding and RIS Optimization for RIS-Assisted MU-MIMO Communication Systems | | 0 |
| Efficient Text-driven Motion Generation via Latent Consistency Training | Code | 0 |
| Exploring Extreme Quantization in Spiking Language Models | | 0 |
| Lightweight Change Detection in Heterogeneous Remote Sensing Images with Online All-Integer Pruning Training | | 0 |
| Three Quantization Regimes for ReLU Networks | | 0 |
| Network reconstruction via the minimum description length principle | | 0 |
| Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design | Code | 2 |
| Efficient Compression of Multitask Multilingual Speech Models | | 0 |
| Deep Learning Models in Speech Recognition: Measuring GPU Energy Consumption, Impact of Noise and Model Quantization for Edge Deployment | Code | 0 |
| Joint Sequential Fronthaul Quantization and Hardware Complexity Reduction in Uplink Cell-Free Massive MIMO Networks | | 0 |
| Wake Vision: A Tailored Dataset and Benchmark Suite for TinyML Computer Vision Applications | | 0 |
| Model Quantization and Hardware Acceleration for Vision Transformers: A Comprehensive Survey | Code | 2 |
| When Quantization Affects Confidence of Large Language Models? | Code | 0 |
| Gradient-based Automatic Mixed Precision Quantization for Neural Networks On-Chip | Code | 1 |
| Investigating Automatic Scoring and Feedback using Large Language Models | | 0 |
| Self-supervised Pre-training of Text Recognizers | Code | 0 |
| Transition Rate Scheduling for Quantization-Aware Training | | 0 |
| Quantized Context Based LIF Neurons for Recurrent Spiking Neural Networks in 45nm | | 0 |
| Enhancing Channel Estimation in Quantized Systems with a Generative Prior | | 0 |
| sDAC -- Semantic Digital Analog Converter for Semantic Communications | | 0 |
| How to Parameterize Asymmetric Quantization Ranges for Quantization-Aware Training | | 0 |
| MMGRec: Multimodal Generative Recommendation with Transformer Model | | 0 |
| Semantic Routing for Enhanced Performance of LLM-Assisted Intent-Based 5G Core Network Management and Orchestration | Code | 7 |
| CoST: Contrastive Quantization based Semantic Tokenization for Generative Recommendation | | 0 |
| CNN-Based Equalization for Communications: Achieving Gigabit Throughput with a Flexible FPGA Hardware Architecture | | 0 |
| AdaQAT: Adaptive Bit-Width Quantization-Aware Training | | 0 |
| Latency-Distortion Tradeoffs in Communicating Classification Results over Noisy Channels | | 0 |
| An empirical study of LLaMA3 quantization: from LLMs to MLLMs | Code | 2 |
| MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | Code | 3 |
| FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization | | 0 |
| A SER-based Device Selection Mechanism in Multi-bits Quantization Federated Learning | | 0 |
| HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression | | 0 |
| MAexp: A Generic Platform for RL-based Multi-Agent Exploration | Code | 2 |
| decoupleQ: Towards 2-bit Post-Training Uniform Quantization via decoupling Parameters into Integer and Floating Points | Code | 2 |
| Privacy-Preserving UCB Decision Process Verification via zk-SNARKs | | 0 |
Page 29 of 99

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | | Accuracy | 99.8 | | Unverified |