Quantization
Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).
Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
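As a minimal sketch of the idea, the NumPy snippet below performs post-training affine quantization of a float32 array to int8 and dequantizes it back. The helper names `quantize_int8` and `dequantize` are illustrative assumptions, not an API from the cited paper.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization: map the float range [min, max] onto int8 [-128, 127]."""
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    # Guard against a constant tensor, where the range would be zero.
    scale = max((x_max - x_min) / (qmax - qmin), 1e-8)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximate float32 array from the int8 values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
print("max abs error:", np.abs(x - x_hat).max())
```

Each int8 value stands in for a float via `x ≈ (q - zero_point) * scale`, so storage drops 4x and arithmetic can run on cheap integer units, at the cost of a small rounding error bounded by `scale / 2`.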
Papers
4,925 papers, across datasets including ImageNet, CIFAR-10, Wiki-40B, AgeDB-30, CFP-FP, COCO (Common Objects in Context), IJB-B, IJB-C, and LFW.
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SSD ResNet50 V1 FPN 640x640 | mAP | 34.3 | — | Unverified |