
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
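
As a minimal sketch of the idea in the definition above (not the method of the cited paper; the helper names quantize_int8/dequantize and the per-tensor symmetric scale are assumptions chosen for illustration), the snippet below maps a float32 tensor onto int8 with a single scale factor and reconstructs an approximation of the original values:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric uniform quantization: float32 tensor -> (int8 tensor, scale)."""
    # One scale per tensor, chosen so the largest magnitude lands on +/-127.
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 values; rounding error is at most scale/2.
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
print("max reconstruction error:", np.max(np.abs(x - dequantize(q, scale))))
```

Per-tensor symmetric scaling is the simplest choice; practical schemes often use per-channel scales or an asymmetric zero point to reduce the quantization error further.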

Papers

Showing 1901–1925 of 4925 papers

Title | Status | Hype
FullPack: Full Vector Utilization for Sub-Byte Quantized Inference on General Purpose CPUs |  | 0
Full-Precision Free Binary Graph Neural Networks |  | 0
Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms |  | 0
Fully Digital Second-order Level-crossing Sampling ADC for Data Saving in Sensing Sparse Signals |  | 0
Are disentangled representations all you need to build speaker anonymization systems? |  | 0
Homomorphic Encryption-Enabled Distance-Based Distributed Formation Control with Distance Mismatch Estimators |  | 0
A Deep Learning Inference Scheme Based on Pipelined Matrix Multiplication Acceleration Design and Non-uniform Quantization |  | 0
Accelerator-Aware Training for Transducer-Based Speech Recognition |  | 0
Fault-Tolerant Four-Dimensional Constellation for Coherent Optical Transmission Systems |  | 0
Functional Invariants to Watermark Large Transformers |  | 0
Functional quantization of rough volatility and applications to volatility derivatives |  | 0
Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach |  | 0
Fundamental Trade-offs in Quantized Hybrid Radar Fusion: A CRB-Rate Perspective |  | 0
FunQuant: A R package to perform quantization in the context of rare events and time-consuming simulations |  | 0
FusionSAM: Latent Space driven Segment Anything Model for Multimodal Fusion and Segmentation |  | 0
Fuzzy-Based Dialectical Non-Supervised Image Classification and Clustering |  | 0
Fuzzy Norm-Explicit Product Quantization for Recommender Systems |  | 0
FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation |  | 0
Compact Neural Graphics Primitives with Learned Hash Probing |  | 0
FATNN: Fast and Accurate Ternary Neural Networks |  | 0
CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks |  | 0
GANCompress: GAN-Enhanced Neural Image Compression with Binary Spherical Quantization |  | 0
Are Conventional SNNs Really Efficient? A Perspective from Network Quantization |  | 0
FAT: An In-Memory Accelerator with Fast Addition for Ternary Weight Neural Networks |  | 0
Fast top-K Cosine Similarity Search through XOR-Friendly Binary Quantization on GPUs |  | 0
Page 77 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 |  | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 |  | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 |  | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 |  | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 |  | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 |  | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 |  | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 |  | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 |  | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 |  | Unverified
2 | DTQ | MAP | 0.79 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 |  | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 98.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 92.92 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 95.13 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | TAR @ FAR=1e-4 | 96.38 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 |  | Accuracy | 99.8 |  | Unverified