SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
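As a rough illustration of the quoted definition, the sketch below maps a float32 tensor onto int8 and back using generic uniform affine (scale/zero-point) quantization. This is not the adaptive-precision scheme of the cited paper; the function names, per-tensor granularity, and range handling are illustrative assumptions.

# Minimal sketch of uniform affine int8 quantization (illustrative, not the
# method from the cited paper): compute a per-tensor scale and zero point,
# round to int8, then dequantize to measure the approximation error.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Quantize a float32 array to int8 with a per-tensor scale and zero point."""
    x_min, x_max = float(x.min()), float(x.max())
    qmin, qmax = -128, 127
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard against constant tensors
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float32 array from its int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)  # stand-in for a weight tensor
    q, scale, zp = quantize_int8(w)
    w_hat = dequantize_int8(q, scale, zp)
    print("max abs quantization error:", np.abs(w - w_hat).max())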

Papers

Showing 1126–1150 of 4925 papers

Title | Status | Hype
Learning Frequency-Specific Quantization Scaling in VVC for Standard-Compliant Task-driven Image Coding | Code | 0
LFZip: Lossy compression of multivariate floating-point time series data via improved prediction | Code | 0
CAT: Compression-Aware Training for bandwidth reduction | Code | 0
CASP: Compression of Large Multimodal Models Based on Attention Sparsity | Code | 0
An Overview of Arithmetic Adaptations for Inference of Convolutional Neural Networks on Re-configurable Hardware | Code | 0
Cartesian K-Means | Code | 0
Adaptive Loss-aware Quantization for Multi-bit Networks | Code | 0
Learning Bag-of-Features Pooling for Deep Convolutional Neural Networks | Code | 0
Learning Accurate Performance Predictors for Ultrafast Automated Model Compression | Code | 0
Accelerated Nearest Neighbor Search with Quick ADC | Code | 0
Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization | Code | 0
Learning compact binary descriptors with unsupervised deep neural networks | Code | 0
Langevin dynamics based algorithm e-THO POULA for stochastic optimization problems with discontinuous stochastic gradient | Code | 0
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Code | 0
Learned transform compression with optimized entropy encoding | Code | 0
Learning Compression from Limited Unlabeled Data | Code | 0
KP2Dtiny: Quantized Neural Keypoint Detection and Description on the Edge | Code | 0
Just Round: Quantized Observation Spaces Enable Memory Efficient Learning of Dynamic Locomotion | Code | 0
JPEG Inspired Deep Learning | Code | 0
Joint Maximum Purity Forest with Application to Image Super-Resolution | Code | 0
BRIDLE: Generalized Self-supervised Learning with Quantization | Code | 0
Joint Pruning and Channel-wise Mixed-Precision Quantization for Efficient Deep Neural Networks | Code | 0
KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference | Code | 0
Forward and Backward Information Retention for Accurate Binary Neural Networks | Code | 0
Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified