| Paper | Date | Tasks | Code |
|---|---|---|---|
| Prune Your Model Before Distill It | Sep 30, 2021 | Knowledge Distillation, model | Code Available |
| Quantisation and Pruning for Neural Network Compression and Regularisation | Jan 14, 2020 | Network Pruning, Neural Network Compression | Code Available |
| Learning Filter Basis for Convolutional Neural Network Compression | Aug 23, 2019 | General Classification, image-classification | Code Available |
| Robustness and Transferability of Universal Attacks on Compressed Models | Dec 10, 2020 | Neural Network Compression, Quantization | Code Available |
| FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Feb 15, 2021 | Model Compression, Neural Network Compression | Code Available |
| Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Feb 1, 2022 | Neural Network Compression, Quantization | Code Available |
| Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Oct 1, 2019 | Edge-computing, Image Classification | Code Available |
| Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Jun 16, 2021 | Deep Learning, Information Retrieval | Code Available |
| Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems | Nov 20, 2020 | Edge-computing, image-classification | Code Available |
| PD-Quant: Post-Training Quantization based on Prediction Difference Metric | Dec 14, 2022 | Neural Network Compression, Quantization | Code Available |