SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
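To make the float-to-fixed-point idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization. It is an illustration of the general technique described above, not the scheme from the cited paper; the function names and the choice of a symmetric scale are assumptions for this example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map float32 values to int8.

    Illustrative sketch only -- not the method of any specific paper.
    """
    # Choose a scale so the largest magnitude maps to 127.
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float32 tensor.
    return q.astype(np.float32) * scale

x = np.array([-1.5, 0.0, 0.75, 1.5], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
```

Arithmetic on the int8 tensor `q` is cheap; the reconstruction error per element is bounded by half the scale step, which is the accuracy/cost trade-off quantized training methods manage.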

Papers

Showing 2701–2725 of 4925 papers

Title | Status | Hype
Running Conventional Automatic Speech Recognition on Memristor Hardware: A Simulated Approach | - | 0
S3D: A Simple and Cost-Effective Self-Speculative Decoding Scheme for Low-Memory GPUs | - | 0
S3-Net: A Fast and Lightweight Video Scene Understanding Network by Single-shot Segmentation | - | 0
S4: a High-sparsity, High-performance AI Accelerator | - | 0
SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference | - | 0
SaleNet: A low-power end-to-end CNN accelerator for sustained attention level evaluation using EEG | - | 0
Saliency Assisted Quantization for Neural Networks | - | 0
HDR image watermarking using saliency detection and quantization index modulation | - | 0
SAMP: A Model Inference Toolkit of Post-Training Quantization for Text Processing via Self-Adaptive Mixed-Precision | - | 0
Sampled-data control design for systems with quantized actuators | - | 0
Sampling From Autoencoders' Latent Space via Quantization And Probability Mass Function Concepts | - | 0
Sampling Streaming Data with Parallel Vector Quantization -- PVQ | - | 0
SAQ-SAM: Semantically-Aligned Quantization for Segment Anything Model | - | 0
Scalable and consistent embedding of probability measures into Hilbert spaces via measure quantization | - | 0
Scalable and Efficient Neural Speech Coding: A Hybrid Design | - | 0
Scalable Image Retrieval by Sparse Product Quantization | - | 0
Scalable Multivariate Fronthaul Quantization for Cell-Free Massive MIMO | - | 0
Scalable Nearest Neighbor Search based on kNN Graph | - | 0
Scalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization | - | 0
Scalable Representation Learning for Multimodal Tabular Transactions | - | 0
Scalable Thermodynamic Second-order Optimization | - | 0
Scalar Arithmetic Multiple Data: Customizable Precision for Deep Neural Networks | - | 0
Scaled Quantization for the Vision Transformer | - | 0
Scaling FP8 training to trillion-token LLMs | - | 0
Scaling Language Model Size in Cross-Device Federated Learning | - | 0
Page 109 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified