| Title | Date | Topics | Code | # |
| --- | --- | --- | --- | --- |
| Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Feb 1, 2022 | Neural Network Compression, Quantization | Code Available | 1 |
| CHIP: CHannel Independence-based Pruning for Compact Neural Networks | Oct 26, 2021 | Neural Network Compression | Code Available | 1 |
| NeRV: Neural Representations for Videos | Oct 26, 2021 | Denoising, Neural Network Compression | Code Available | 1 |
| Prune Your Model Before Distill It | Sep 30, 2021 | Knowledge Distillation, Model | Code Available | 1 |
| Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Jun 16, 2021 | Deep Learning, Information Retrieval | Code Available | 1 |
| Spectral Tensor Train Parameterization of Deep Learning Layers | Mar 7, 2021 | Deep Learning, Image Classification | Code Available | 1 |
| FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Feb 15, 2021 | Model Compression, Neural Network Compression | Code Available | 1 |
| Robustness and Transferability of Universal Attacks on Compressed Models | Dec 10, 2020 | Neural Network Compression, Quantization | Code Available | 1 |
| Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems | Nov 20, 2020 | Edge Computing, Image Classification | Code Available | 1 |
| T-Basis: a Compact Representation for Neural Networks | Jul 13, 2020 | Neural Network Compression, Tensor Networks | Code Available | 1 |