
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
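To make the floating-point-to-fixed-point mapping concrete, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization. It illustrates the general idea only, not the adaptive-precision training scheme from the cited paper; the helper names `quantize_int8` and `dequantize` and the zero-guard constant are choices made here for illustration.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 values -> int8 codes plus a scale."""
    # Map the largest magnitude in the tensor to the int8 limit (127).
    # The 1e-8 guard (an assumption here) avoids division by zero for all-zero tensors.
    scale = max(float(np.max(np.abs(x))), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(8).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print(np.max(np.abs(x - x_hat)))  # rounding error is at most scale / 2
```

Training-time schemes such as the one in the cited paper go further, quantizing the back-propagation pass as well and adapting the precision during training rather than using a single fixed scale.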

Papers

Showing 1851–1900 of 4925 papers (page 38 of 99)

A Different View of Sigma-Delta Modulators Under the Lens of Pulse Frequency Modulation
Reinforcement Learning with Foundation Priors: Let the Embodied Agent Efficiently Learn on Its Own
3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation
FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models
FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization
Comparison of 14 different families of classification algorithms on 115 binary datasets
Feature Quantization for Defending Against Distortion of Images
FP8-BERT: Post-Training Quantization for Transformer
Comparing Iterative and Least-Squares Based Phase Noise Tracking in Receivers with 1-bit Quantization and Oversampling
High-performance deep spiking neural networks with 0.3 spikes per neuron
FP8 versus INT8 for efficient deep learning inference
Comparing Fisher Information Regularization with Distillation for DNN Quantization
FPGA Resource-aware Structured Pruning for Real-Time Neural Networks
Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data
Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking
FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion
FPTQ: Fine-grained Post-Training Quantization for Large Language Models
FPTQuant: Function-Preserving Transforms for LLM Quantization
FP=xINT: A Low-Bit Series Expansion Algorithm for Post-Training Quantization
ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers
FD-LSCIC: Frequency Decomposition-based Learned Screen Content Image Compression
FDD Massive MIMO: How to Optimally Combine UL Pilot and Limited DL CSI Feedback?
FD Cell-Free mMIMO: Analysis and Optimization
FCN-Pose: A Pruned and Quantized CNN for Robot Pose Estimation for Constrained Devices
Frame Quantization of Neural Networks
Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge
FBQuant: FeedBack Quantization for Large Language Models
Frequency Autoregressive Image Generation with Continuous Tokens
Frequency-Biased Synergistic Design for Image Compression and Compensation
Frequency Disentangled Features in Neural Image Compression
Compact Representation for Image Classification: To Choose or to Compress?
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
FBI: Fingerprinting models with Benign Inputs
Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms
Are disentangled representations all you need to build speaker anonymization systems?
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs
From Text to Source: Results in Detecting Large Language Model-Generated Content
A Deep Learning Inference Scheme Based on Pipelined Matrix Multiplication Acceleration Design and Non-uniform Quantization
Fronthaul Compression and Passive Beamforming Design for Intelligent Reflecting Surface-aided Cloud Radio Access Networks
Fronthaul-Constrained Distributed Radar Sensing
Fronthaul Quantization-Aware MU-MIMO Precoding for Sum Rate Maximization
FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary
Accelerator-Aware Training for Transducer-Based Speech Recognition
FTL: A universal framework for training low-bit DNNs via Feature Transfer
Fault-Tolerant Four-Dimensional Constellation for Coherent Optical Transmission Systems
Compact Neural Graphics Primitives with Learned Hash Probing
FATNN: Fast and Accurate Ternary Neural Networks
CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified