| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| Forward and Backward Information Retention for Accurate Binary Neural Networks | Sep 24, 2019 | Binarization, Neural Network Compression | Code Available |
| Dirichlet Pruning for Neural Network Compression | Nov 10, 2020 | Neural Network Compression, Variational Inference | Code Available |
| Joint Matrix Decomposition for Deep Convolutional Neural Networks Compression | Jul 9, 2021 | Efficient Neural Network, Matrix Factorization / Decomposition | Code Available |
| Few Sample Knowledge Distillation for Efficient Network Compression | Dec 5, 2018 | Knowledge Distillation, Network Pruning | Code Available |
| Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Sep 26, 2024 | Neural Network Compression, Quantization | Code Available |
| Focused Quantization for Sparse CNNs | Mar 7, 2019 | Model Compression, Neural Network Compression | Code Available |
| Towards Compact CNNs via Collaborative Compression | May 24, 2021 | Neural Network Compression, Tensor Decomposition | Code Available |
| Learning Sparse Networks Using Targeted Dropout | May 31, 2019 | Network Pruning, Neural Network Compression | Code Available |
| Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach | Oct 14, 2019 | Neural Network Compression, Quantization | Code Available |
| Differentiable Fine-grained Quantization for Deep Neural Network Compression | Oct 20, 2018 | Neural Network Compression, Quantization | Code Available |