SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
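For illustration, here is a minimal NumPy sketch of symmetric per-tensor uniform quantization from float32 to int8; the function names and the per-tensor scaling scheme are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric uniform quantization of a float32 tensor to int8 (illustrative)."""
    # The scale maps the observed float range onto the int8 range [-127, 127];
    # the epsilon guards against an all-zero tensor.
    scale = max(np.abs(x).max(), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from its int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs error:", np.abs(x - x_hat).max())  # rounding error is bounded by ~scale/2
```

The per-element round-trip error is bounded by about scale/2, which is the accuracy/cost trade-off that the bit width controls. Training-time schemes such as the source paper's apply the same idea to back propagation, representing gradients with fixed-point numbers as well.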

Papers

Showing 1851–1900 of 4925 papers

Title | Status | Hype
Distributed Deep Reinforcement Learning Based Gradient Quantization for Federated Learning Enabled Vehicle Edge Computing | — | 0
Autoregressive Speech Synthesis without Vector Quantization | — | 0
Applying generative neural networks for fast simulations of the ALICE (CERN) experiment | Code | 0
ERQ: Error Reduction for Post-Training Quantization of Vision Transformers | — | 0
Ternary Spike-based Neuromorphic Signal Processing System | — | 0
ZOBNN: Zero-Overhead Dependable Design of Binary Neural Networks with Deliberately Quantized Parameters | — | 0
Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression | Code | 0
Integer-only Quantized Transformers for Embedded FPGA-based Time-series Forecasting in AIoT | — | 0
Balance of Number of Embedding and their Dimensions in Vector Quantization | — | 0
Quantizing YOLOv7: A Comprehensive Study | — | 0
Hybrid Receiver Design for Massive MIMO-OFDM with Low-Resolution ADCs and Oversampling | — | 0
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | — | 0
Resource-Efficient Speech Quality Prediction through Quantization Aware Training and Binary Activation Maps | Code | 0
Low-latency machine learning FPGA accelerator for multi-qubit-state discrimination | — | 0
Joint Beamforming Design and Bit Allocation in Massive MIMO with Resolution-Adaptive ADCs | — | 0
Timestep-Aware Correction for Quantized Diffusion Models | — | 0
QET: Enhancing Quantized LLM Parameters and KV cache Compression through Element Substitution and Residual Clustering | — | 0
GPTQT: Quantize Large Language Models Twice to Push the Efficiency | — | 0
Fisher-aware Quantization for DETR Detectors with Critical-category Objectives | — | 0
ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers | — | 0
Codec-ASR: Training Performant Automatic Speech Recognition Systems with Discrete Speech Representations | — | 0
SFC: Achieve Accurate Fast Convolution under Low-precision Arithmetic | — | 0
Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization | — | 0
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment | — | 0
Edge AI-Enabled Chicken Health Detection Based on Enhanced FCOS-Lite and Knowledge Distillation | — | 0
OSPC: Artificial VLM Features for Hateful Meme Detection | — | 0
How Does Quantization Affect Multilingual LLMs? | — | 0
Joint Pruning and Channel-wise Mixed-Precision Quantization for Efficient Deep Neural Networks | Code | 0
Exploring FPGA designs for MX and beyond | — | 0
Beyond Throughput and Compression Ratios: Towards High End-to-end Utility of Gradient Compression | — | 0
PQCache: Product Quantization-based KVCache for Long Context LLM Inference | — | 0
Linear and Nonlinear MMSE Estimation in One-Bit Quantized Systems under a Gaussian Mixture Prior | — | 0
NeuroNAS: Enhancing Efficiency of Neuromorphic In-Memory Computing for Intelligent Mobile Agents through Hardware-Aware Spiking Neural Architecture Search | — | 0
Toward a Diffusion-Based Generalist for Dense Vision Tasks | — | 0
Rateless Stochastic Coding for Delay-Constrained Semantic Communication | — | 0
Deep Fusion Model for Brain Tumor Classification Using Fine-Grained Gradient Preservation | — | 0
Reliable edge machine learning hardware for scientific applications | — | 0
Fronthaul Quantization-Aware MU-MIMO Precoding for Sum Rate Maximization | — | 0
Efficient course recommendations with T5-based ranking and summarization | Code | 0
MCNC: Manifold Constrained Network Compression | — | 0
OutlierTune: Efficient Channel-Wise Quantization for Large Language Models | — | 0
FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization | — | 0
A Quantization-based Technique for Privacy Preserving Distributed Learning | — | 0
Differential error feedback for communication-efficient decentralized learning | — | 0
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization | — | 0
Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels | Code | 0
Reducing the Memory Footprint of 3D Gaussian Splatting | — | 0
Compensate Quantization Errors: Make Weights Hierarchical to Compensate Each Other | — | 0
Approximate DCT and Quantization Techniques for Energy-Constrained Image Sensors | — | 0
BitNet b1.58 Reloaded: State-of-the-art Performance Also on Smaller Networks | — | 0
Page 38 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified