
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
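
As an illustration of the idea, here is a minimal NumPy sketch of uniform symmetric per-tensor quantization. This is not the method of the cited paper; the function names and the symmetric scaling scheme are assumptions chosen for clarity.

import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform symmetric quantization: map a float32 tensor to int8 plus a scale."""
    qmax = 127  # largest magnitude representable in signed 8-bit
    # Map the largest absolute value onto qmax; fall back to 1.0 for an all-zero tensor.
    scale = (float(np.max(np.abs(x))) / qmax) or 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

# Example: round-trip a random weight tensor and inspect the quantization error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
error = np.max(np.abs(w - dequantize(q, s)))
print(f"max abs round-trip error: {error:.5f}")

Once weights and activations are stored this way, matrix multiplies can run in integer arithmetic with a single floating-point rescale at the end, which is where the computation savings come from.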

Papers

Showing 1851-1900 of 4925 papers

Title | Status | Hype
Forward Link Analysis for Full-Duplex Cellular Networks with Low Resolution ADC/DAC | | 0
Reinforcement Learning with Foundation Priors: Let the Embodied Agent Efficiently Learn on Its Own | | 0
Compression of Generative Pre-trained Language Models via Quantization | | 0
FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks | | 0
BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization | | 0
DQ-SGD: Dynamic Quantization in SGD for Communication-Efficient Distributed Learning | | 0
A New Learning Method for Inference Accuracy, Core Occupation, and Performance Co-optimization on TrueNorth Chip | | 0
DQ-Data2vec: Decoupling Quantization for Multilingual Speech Recognition | | 0
FP8-BERT: Post-Training Quantization for Transformer | | 0
BitNet b1.58 Reloaded: State-of-the-art Performance Also on Smaller Networks | | 0
DQA: An Efficient Method for Deep Quantization of Deep Neural Network Activations | | 0
FP8 versus INT8 for efficient deep learning inference | | 0
A new heuristic algorithm for fast k-segmentation | | 0
FPGA Resource-aware Structured Pruning for Real-Time Neural Networks | | 0
Auditing Black-Box LLM APIs with a Rank-Based Uniformity Test | | 0
FPRaker: A Processing Element For Accelerating Neural Network Training | | 0
FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion | | 0
FPTQ: Fine-grained Post-Training Quantization for Large Language Models | | 0
FPTQuant: Function-Preserving Transforms for LLM Quantization | | 0
FP=xINT: A Low-Bit Series Expansion Algorithm for Post-Training Quantization | | 0
FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference | | 0
On the Convergence of Differentially Private Federated Learning on Non-Lipschitz Objectives, and with Normalized Client Updates | | 0
DP-Net: Dynamic Programming Guided Deep Neural Network Compression | | 0
A "Network Pruning Network" Approach to Deep Model Compression | | 0
Downlink MIMO Channel Estimation from Bits: Recoverability and Algorithm | | 0
Bit-Mixer: Mixed-precision networks with runtime bit-width selection | | 0
Frame Quantization of Neural Networks | | 0
Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge | | 0
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC | | 0
Frequency Autoregressive Image Generation with Continuous Tokens | | 0
Frequency-Biased Synergistic Design for Image Compression and Compensation | | 0
Frequency Disentangled Features in Neural Image Compression | | 0
Downlink Clustering-Based Scheduling of IRS-Assisted Communications With Reconfiguration Constraints | | 0
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks | | 0
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | | 0
Double Quantization for Communication-Efficient Distributed Optimization | | 0
From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference | | 0
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | | 0
From Text to Source: Results in Detecting Large Language Model-Generated Content | | 0
Double JPEG Detection in Mixed JPEG Quality Factors using Deep Convolutional Neural Network | | 0
Fronthaul Compression and Passive Beamforming Design for Intelligent Reflecting Surface-aided Cloud Radio Access Networks | | 0
Fronthaul-Constrained Distributed Radar Sensing | | 0
Fronthaul Quantization-Aware MU-MIMO Precoding for Sum Rate Maximization | | 0
FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary | | 0
Bit Efficient Quantization for Deep Neural Networks | | 0
FTL: A universal framework for training low-bit DNNs via Feature Transfer | | 0
A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling | | 0
Full-Duplex Beyond Self-Interference: The Unlimited Sensing Way | | 0
GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training | | 0
GranQ: Granular Zero-Shot Quantization with Channel-Wise Activation Scaling in QAT | | 0
Page 38 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified