SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
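
The float-to-fixed-point mapping described above can be made concrete with a short sketch. Below is a minimal NumPy illustration of affine (scale-and-zero-point) int8 quantization; the function names (`quantize_int8`, `dequantize`) and the range handling are our own assumptions for illustration, not code from the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine (asymmetric) quantization of a float32 tensor to int8.

    Maps the observed range [min(x), max(x)] onto [-128, 127] with a
    scale and zero-point, as in standard post-training quantization.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    if x_max == x_min:
        scale = 1.0  # degenerate constant tensor: any positive scale works
    else:
        scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip example: per-element error is roughly bounded by scale / 2.
x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
print("max abs error:", np.abs(x - dequantize(q, scale, zp)).max())
```

Integer training schemes such as the adaptive precision method cited above build on this same quantize/dequantize round trip, but apply it to activations and gradients during back propagation rather than only to stored weights.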

Papers

Showing 2026–2050 of 4925 papers

| Title | Status | Hype |
| --- | --- | --- |
| FastICARL: Fast Incremental Classifier and Representation Learning with Efficient Budget Allocation in Audio Sensing Applications |  | 0 |
| Communication-efficient k-Means for Edge-based Machine Learning |  | 0 |
| Arbitrary Bit-width Network: A Joint Layer-Wise Quantization and Adaptive Inference Approach |  | 0 |
| Guaranteed Quantization Error Computation for Neural Network Model Compression |  | 0 |
| Faster Neural Net Inference via Forests of Sparse Oblique Decision Trees |  | 0 |
| Faster Inference of Integer SWIN Transformer by Removing the GELU Activation |  | 0 |
| Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks |  | 0 |
| Gull: A Generative Multifunctional Audio Codec |  | 0 |
| GWQ: Gradient-Aware Weight Quantization for Large Language Models |  | 0 |
| Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks |  | 0 |
| Fastening the Initial Access in 5G NR Sidelink for 6G V2X Networks |  | 0 |
| HACK: Homomorphic Acceleration via Compression of the Key-Value Cache for Disaggregated LLM Inference |  | 0 |
| Arabic Compact Language Modelling for Resource Limited Devices |  | 0 |
| Hadamard Domain Training with Integers for Class Incremental Quantized Learning |  | 0 |
| HadaNets: Flexible Quantization Strategies for Neural Networks |  | 0 |
| HadaNorm: Diffusion Transformer Quantization through Mean-Centered Transformations |  | 0 |
| HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis |  | 0 |
| Additive Quantization for Extreme Vector Compression |  | 0 |
| Acceleration for Compressed Gradient Descent in Distributed Optimization |  | 0 |
| FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding |  | 0 |
| LANA: Latency Aware Network Acceleration |  | 0 |
| Fast DistilBERT on CPUs |  | 0 |
| Communication-Efficient Federated Learning over Capacity-Limited Wireless Networks |  | 0 |
| Communication-Efficient Federated Learning via Quantized Compressed Sensing |  | 0 |
| AQUILA: Communication Efficient Federated Learning with Adaptive Quantization in Device Selection Strategy |  | 0 |
Page 82 of 197

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 |  | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 |  | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 |  | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 |  | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 |  | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 |  | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 |  | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 |  | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 |  | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 |  | Unverified |
| 2 | DTQ | MAP | 0.79 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 |  | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 |  | Accuracy | 98.13 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 |  | Accuracy | 92.92 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 |  | TAR @ FAR=1e-4 | 95.13 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 |  | TAR @ FAR=1e-4 | 96.38 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 |  | Accuracy | 99.8 |  | Unverified |