Papers on model compression and efficient inference (code available for each):

- Aligned Structured Sparsity Learning for Efficient Image Super-Resolution (Dec 1, 2021). Tags: Image Super-Resolution, Knowledge Distillation
- Distilling Linguistic Context for Language Model Compression (Sep 17, 2021). Tags: Knowledge Distillation, Language Modeling
- Basic Binary Convolution Unit for Binarized Image Restoration Network (Oct 2, 2022). Tags: Binarization, Image Restoration
- A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness (Jun 16, 2021). Tags: Data Augmentation, Model Compression
- BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance (Oct 13, 2020). Tags: Model Compression
- Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation (Jan 25, 2024). Tags: Clustering, Federated Learning
- BERT-of-Theseus: Compressing BERT by Progressive Module Replacing (Feb 7, 2020). Tags: Knowledge Distillation, Model Compression
- Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems (Oct 1, 2019). Tags: Edge Computing, Image Classification
- Dynamic DNNs and Runtime Management for Efficient Inference on Mobile/Embedded Devices (Jan 17, 2024). Tags: Dynamic Neural Networks, GPU
- Activation-Informed Merging of Large Language Models (Feb 4, 2025). Tags: Computational Efficiency, Continual Learning
- Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing (Mar 10, 2024). Tags: Image Retrieval, Knowledge Distillation
- CHEX: CHannel EXploration for CNN Model Compression (Mar 29, 2022). Tags: Image Classification
- Communication-Efficient Diffusion Strategy for Performance Improvement of Federated Learning with Non-IID Data (Jul 15, 2022). Tags: Federated Learning, Model Compression
- Distilling Object Detectors with Feature Richness (Nov 1, 2021). Tags: Knowledge Distillation, Model Compression
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation (Jul 3, 2021). Tags: Knowledge Distillation, Model Compression
- Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning (May 12, 2021). Tags: Clustering, Federated Learning
- Class Attention Transfer Based Knowledge Distillation (Apr 25, 2023). Tags: Knowledge Distillation, Model Compression
- Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks (Mar 25, 2022). Tags: Incremental Learning, Knowledge Distillation
- CoA: Towards Real Image Dehazing via Compression-and-Adaptation (Jan 1, 2025). Tags: Image Dehazing, Model Compression
- Compacting, Picking and Growing for Unforgetting Continual Learning (Oct 15, 2019). Tags: Age and Gender Classification, Continual Learning
- Discrimination-aware Channel Pruning for Deep Neural Networks (Oct 28, 2018). Tags: Channel Selection, Model Compression
- A Unified Pruning Framework for Vision Transformers (Nov 30, 2021). Tags: Model Compression, Object Detection
- Compression-Aware Video Super-Resolution (Jan 1, 2023). Tags: Model Compression, Super-Resolution
- Comprehensive Knowledge Distillation with Causal Intervention (Dec 1, 2021). Tags: Causal Inference, Knowledge Distillation
- Discrimination-aware Network Pruning for Deep Model Compression (Jan 4, 2020). Tags: Face Recognition, Image Classification