
Quantization

Quantization is a promising technique for reducing the computation cost of neural network training by replacing high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
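For intuition, below is a minimal sketch of symmetric per-tensor int8 quantization in NumPy. The function names and the max-abs scaling rule are illustrative assumptions for this page, not the scheme of the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float32 array to int8 using a single symmetric per-tensor scale."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # avoid division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 array from the int8 codes."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    x = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(x)
    x_hat = dequantize_int8(q, scale)
    print("max abs quantization error:", np.abs(x - x_hat).max())
```

The max-abs scale is simply the easiest way to spread values across the int8 range; the rounding error shrinks as more of that range is used.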

Papers

Showing 1–10 of 4925 papers

Title | Status | Hype
Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation | Code | 0
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC | | 0
Angle Estimation of a Single Source with Massive Uniform Circular Arrays | | 0
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine | | 0
Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications | | 0
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization | Code | 2
Lightweight Federated Learning over Wireless Edge Networks | | 0
Compress Any Segment Anything Model (SAM) | Code | 1
Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified