| Paper | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition | Aug 11, 2022 | Data Augmentation, Image Classification | Code Available | 1 |
| Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | Jun 6, 2021 | Continual Learning, Federated Learning | Code Available | 1 |
| Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation | Mar 15, 2021 | Data Augmentation, Knowledge Distillation | Code Available | 1 |
| Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation | Feb 25, 2021 | Knowledge Distillation, Self-Knowledge Distillation | Code Available | 1 |
| Noisy Self-Knowledge Distillation for Text Summarization | Sep 15, 2020 | Knowledge Distillation, Self-Knowledge Distillation | Code Available | 1 |
| Self-Knowledge Distillation with Progressive Refinement of Targets | Jun 22, 2020 | Image Classification | Code Available | 1 |
| ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks | May 7, 2020 | Knowledge Distillation, Self-Knowledge Distillation | Code Available | 1 |
| Regularizing Class-wise Predictions via Self-knowledge Distillation | Mar 31, 2020 | Image Classification | Code Available | 1 |
| Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation | Jun 25, 2025 | Federated Learning, Knowledge Distillation | Code Available | 0 |
| MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | Mar 26, 2025 | Knowledge Distillation, Mixture-of-Experts | Unverified | 0 |