SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
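The float-to-fixed-point mapping described above can be sketched with a minimal affine (scale and zero-point) int8 quantizer. This is a generic illustration of the technique, not the scheme from the cited paper; the function names and the choice of asymmetric quantization are this sketch's own assumptions.

```python
import numpy as np

def quantize_int8(x):
    """Affine quantization: map the observed float range of x onto int8 [-128, 127]."""
    scale = (x.max() - x.min()) / 255.0          # width of one quantization step
    zero_point = int(np.round(-128 - x.min() / scale))  # int8 value representing 0.0's offset
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 tensor from the int8 codes."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip a random tensor; per-element error is bounded by about one step (scale).
x = np.random.randn(1000).astype(np.float32)
q, s, z = quantize_int8(x)
x_hat = dequantize(q, s, z)
```

Storing `q` (1 byte/element) plus a single `scale`/`zero_point` pair in place of float32 (4 bytes/element) is where the memory and compute savings come from; matrix multiplies can then run in int8 arithmetic.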

Papers

Showing 4476–4500 of 4925 papers

Title | Status | Hype
Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization | | 0
Error Feedback Approach for Quantization Noise Reduction of Distributed Graph Filters | | 0
ERVQ: Enhanced Residual Vector Quantization with Intra-and-Inter-Codebook Optimization for Neural Audio Codecs | | 0
eSampling: Energy Harvesting ADCs | | 0
ESC-MVQ: End-to-End Semantic Communication With Multi-Codebook Vector Quantization | | 0
ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA | | 0
Estimating the Completeness of Discrete Speech Units | | 0
Estimation and Quantization of Expected Persistence Diagrams | | 0
EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models | | 0
EuclidNets: Combining hardware and architecture design for Efficient Inference and Training | | 0
Evaluating Post-Training Compression in GANs using Locality-Sensitive Hashing | | 0
Evaluating the Practicality of Learned Image Compression | | 0
Evaluation of Linear Implicit Quantized State System method for analyzing mission performance of power systems | | 0
Evaluation of quality measures for color quantization | | 0
Event-Based Bispectral Photometry Using Temporally Modulated Illumination | | 0
Eventor: An Efficient Event-Based Monocular Multi-View Stereo Accelerator on FPGA Platform | | 0
Event Retrieval in Large Video Collections with Circulant Temporal Encoding | | 0
Distributed Inference with Sparse and Quantized Communication | | 0
Event-Triggered Quantized Average Consensus via Mass Summation | | 0
Exact Bias Correction and Covariance Estimation for Stereo Vision | | 0
Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements | | 0
Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing | | 0
eXmY: A Data Type and Technique for Arbitrary Bit Precision Quantization | | 0
Expand-and-Quantize: Unsupervised Semantic Segmentation Using High-Dimensional Space and Product Quantization | | 0
Expectation maximization transfer learning and its application for bionic hand prostheses | | 0
Page 180 of 197

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified