SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
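The quoted definition describes replacing float32 values with low-bit integers. A minimal NumPy sketch of one common realization, affine int8 quantization with a dequantization round trip, is shown below; the helper names and the simple min/max calibration are illustrative assumptions, not a method from any paper listed on this page:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine (asymmetric) quantization of a float32 tensor to int8.

    Uses simple min/max calibration: the observed value range is
    mapped linearly onto the int8 range [-128, 127].
    """
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)          # step size per int level
    zero_point = int(round(qmin - x.min() / scale))      # int level representing 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

np.random.seed(0)
x = np.random.randn(4, 4).astype(np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize_int8(q, s, zp)
# per-element reconstruction error is on the order of the step size `s`
```

In practice, frameworks refine this sketch with per-channel scales and calibration over many batches, but the scale/zero-point mapping above is the core idea shared by the int8 schemes in the papers listed below.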

Papers

Showing 2701–2750 of 4925 papers

Title | Status | Hype
Design Methodology for Deep Out-of-Distribution Detectors in Real-Time Cyber-Physical Systems | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
Vector Quantized Image-to-Image Translation | — | 0
Adaptive Asymmetric Label-guided Hashing for Multimedia Search | — | 0
Reconciling Security and Communication Efficiency in Federated Learning | Code | 1
Low-complexity CNNs for Acoustic Scene Classification | — | 0
HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation | — | 0
Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization | — | 0
Quantized Sparse Weight Decomposition for Neural Network Compression | — | 0
Characterizing Coherent Integrated Photonic Neural Networks under Imperfections | — | 0
Auto-regressive Image Synthesis with Integrated Quantization | — | 0
CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution | Code | 1
Mixed-Precision Inference Quantization: Radically Towards Faster inference speed, Lower Storage requirement, and Lower Loss | — | 0
Quantized Training of Gradient Boosting Decision Trees | Code | 6
Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach | Code | 1
Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance | Code | 1
Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design | — | 0
Context Unaware Knowledge Distillation for Image Retrieval | Code | 0
RepBNN: towards a precise Binary Neural Network with Enhanced Feature Map via Repeating | Code | 0
FewGAN: Generating from the Joint Distribution of a Few Images | — | 0
Accelerating Deep Learning Model Inference on Arm CPUs with Ultra-Low Bit Quantization and Runtime | — | 0
Is Integer Arithmetic Enough for Deep Learning Training? | — | 0
Quantized Consensus under Data-Rate Constraints and DoS Attacks: A Zooming-In and Holding Approach | — | 0
Latent-Domain Predictive Neural Speech Coding | — | 0
Optimal Database Allocation in Finite Time with Efficient Communication and Transmission Stopping over Dynamic Networks | — | 0
CA-SpaceNet: Counterfactual Analysis for 6D Pose Estimation in Space | Code | 1
S4: a High-sparsity, High-performance AI Accelerator | — | 0
Low-bit Shift Network for End-to-End Spoken Language Understanding | — | 0
Semi-supervised Vector-Quantization in Visual SLAM using HGCN | — | 0
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit | — | 0
Lipschitz Continuity Retained Binary Neural Network | Code | 0
Learning Representations for CSI Adaptive Quantization and Feedback | — | 0
Sub 8-Bit Quantization of Streaming Keyword Spotting Models for Embedded Chipsets | — | 0
Collaborative Quantization Embeddings for Intra-Subject Prostate MR Image Registration | — | 0
Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression | — | 0
DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment | — | 0
Synergistic Self-supervised and Quantization Learning | Code | 1
Sparsifying Binary Networks | — | 0
CEG4N: Counter-Example Guided Neural Network Quantization Refinement | — | 0
Attention Round for Post-Training Quantization | — | 0
Cross-Scale Vector Quantization for Scalable Neural Speech Coding | — | 0
Network Binarization via Contrastive Learning | Code | 1
BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation | — | 0
I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference | Code | 2
Quantum Neural Network Compression | — | 0
Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI | — | 0
QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration | — | 0
On-Device Training Under 256KB Memory | Code | 2
Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition | — | 0
Compressing Pre-trained Transformers via Low-Bit NxM Sparsity for Natural Language Understanding | — | 0
Page 55 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | — | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | — | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | — | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | — | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | — | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | — | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | — | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | — | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | — | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | — | Unverified
2 | DTQ | MAP | 0.79 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | — | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 98.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 92.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 95.13 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | TAR @ FAR=1e-4 | 96.38 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | — | Accuracy | 99.8 | — | Unverified