SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
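To make the float-to-fixed-point mapping concrete, here is a minimal sketch of symmetric linear quantization to int8. This is an illustrative example only, not the scheme from the cited paper; the function names `quantize` and `dequantize` are our own.

```python
def quantize(values, num_bits=8):
    """Symmetric linear quantization: map floats to signed integers.

    Returns the integer codes and the scale needed to recover the floats.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(codes, scale):
    """Map integer codes back to (approximate) float values."""
    return [c * scale for c in codes]

vals = [0.52, -1.30, 0.07, 1.27]
codes, scale = quantize(vals)
# codes are small integers in [-127, 127]; dequantize(codes, scale)
# recovers vals up to a rounding error of at most scale / 2
```

Storing and computing on the int8 codes (plus one float scale per tensor) is what yields the memory and compute savings; the rounding error introduced here is the accuracy cost that the papers below study.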

Papers

Showing 4501–4550 of 4925 papers

Title | Status | Hype
Experimental results on palmvein-based personal recognition by multi-snapshot fusion of textural features | - | 0
Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks | - | 0
Exploiting Change Blindness for Video Coding: Perspectives from a Less Promising User Study | - | 0
Exploiting Intelligent Reflecting Surfaces in NOMA Networks: Joint Beamforming Optimization | - | 0
Exploiting Latent Properties to Optimize Neural Codecs | - | 0
Exploiting Modern Hardware for High-Dimensional Nearest Neighbor Search | - | 0
Exploiting Non-uniform Quantization for Enhanced ILC in Wideband Digital Pre-distortion | - | 0
Exploiting Offset-guided Network for Pose Estimation and Tracking | - | 0
Exploiting Weight Redundancy in CNNs: Beyond Pruning and Quantization | - | 0
Exploration of Activation Fault Reliability in Quantized Systolic Array-Based DNN Accelerators | - | 0
Explore Cross-Codec Quality-Rate Convex Hulls Relation for Adaptive Streaming | - | 0
Explore the Potential of CNN Low Bit Training | - | 0
Exploring Automatic Gym Workouts Recognition Locally On Wearable Resource-Constrained Devices | - | 0
Exploring Extreme Quantization in Spiking Language Models | - | 0
Exploring FPGA designs for MX and beyond | - | 0
Exploring Model Invariance with Discrete Search for Ultra-Low-Bit Quantization | - | 0
Exploring Neural Networks Quantization via Layer-Wise Quantization Analysis | - | 0
Exploring Semantic Segmentation on the DCT Representation | - | 0
Exposing Hardware Building Blocks to Machine Learning Frameworks | - | 0
Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising | - | 0
Extreme Compression for Pre-trained Transformers Made Simple and Efficient | - | 0
Extreme Image Compression using Fine-tuned VQGANs | - | 0
Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM | - | 0
Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation | - | 0
Face recognition using color local binary pattern from mutually independent color channels | - | 0
Factorized Visual Tokenization and Generation | - | 0
FactorizeNet: Progressive Depth Factorization for Efficient Network Architecture Exploration Under Quantization Constraints | - | 0
False Detection (Positives and Negatives) in Object Detection | - | 0
FAMES: Fast Approximate Multiplier Substitution for Mixed-Precision Quantized DNNs--Down to 2 Bits! | - | 0
FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4bit-Compact Multilayer Perceptrons | - | 0
FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN Accelerators through Fault-Aware Quantization | - | 0
FAQS: Communication-efficient Federate DNN Architecture and Quantization Co-Search for personalized Hardware-aware Preferences | - | 0
Fast Autoregressive Models for Continuous Latent Generation | - | 0
Fast binary embeddings, and quantized compressed sensing with structured matrices | - | 0
Fast, Compact, and High Quality LSTM-RNN Based Statistical Parametric Speech Synthesizers for Mobile Devices | - | 0
Fast DistilBERT on CPUs | - | 0
FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding | - | 0
Fastening the Initial Access in 5G NR Sidelink for 6G V2X Networks | - | 0
Faster Inference of Integer SWIN Transformer by Removing the GELU Activation | - | 0
Faster Neural Net Inference via Forests of Sparse Oblique Decision Trees | - | 0
FastICARL: Fast Incremental Classifier and Representation Learning with Efficient Budget Allocation in Audio Sensing Applications | - | 0
Fast Implementation of 4-bit Convolutional Neural Networks for Mobile Devices | - | 0
Fast Inference of Tree Ensembles on ARM Devices | - | 0
Fast Jet Tagging with MLP-Mixers on FPGAs | - | 0
Fast Large-Scale Discrete Optimization Based on Principal Coordinate Descent | - | 0
Fast learning rates with heavy-tailed losses | - | 0
Fast Low-rank Representation based Spatial Pyramid Matching for Image Classification | - | 0
FastMamba: A High-Speed and Efficient Mamba Accelerator on FPGA with Accurate Quantization | - | 0
Fast on-line signature recognition based on VQ with time modeling | - | 0
Fast Orthogonal Projection Based on Kronecker Product | - | 0
Page 91 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified