| Title | Date | Tasks | Code |
| --- | --- | --- | --- |
| Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation | Oct 19, 2021 | Knowledge Distillation, Neural Network Compression | Code Available |
| Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | Jan 28, 2019 | Language Modelling | Code Available |
| Teacher-Class Network: A Neural Network Compression Mechanism | Apr 7, 2020 | Image Classification | Code Available |
| Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters | Sep 30, 2018 | Neural Network Compression, Quantization | Code Available |
| DP-Net: Dynamic Programming Guided Deep Neural Network Compression | Mar 21, 2020 | Clustering, Neural Network Compression | Unverified |
| DKM: Differentiable K-Means Clustering Layer for Neural Network Compression | Aug 28, 2021 | Clustering, Model Compression | Unverified |
| Distilling Pixel-Wise Feature Similarities for Semantic Segmentation | Oct 31, 2019 | Knowledge Distillation, Neural Network Compression | Unverified |
| An Overview of Neural Network Compression | Jun 5, 2020 | Knowledge Distillation, Model Compression | Unverified |
| Distilling Critical Paths in Convolutional Neural Networks | Oct 28, 2018 | Neural Network Compression | Unverified |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | Jul 20, 2020 | Neural Network Compression, Quantization | Unverified |