| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation? | May 22, 2023 | Data-free Knowledge Distillation, Few-Shot Learning | Code Available |
| Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint | Jan 1, 2023 | Data Augmentation, Data-free Knowledge Distillation | Code Available |
| ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | Oct 22, 2022 | Data-free Knowledge Distillation, Dataset Generation | Code Available |
| Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning | Mar 17, 2022 | Data-free Knowledge Distillation, Federated Learning | Code Available |
| ZeroGen: Efficient Zero-shot Learning via Dataset Generation | Feb 16, 2022 | Data-free Knowledge Distillation, Dataset Generation | Code Available |
| Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay | Jan 9, 2022 | Data-free Knowledge Distillation, Image Classification | Code Available |
| Up to 100× Faster Data-free Knowledge Distillation | Dec 12, 2021 | Data-free Knowledge Distillation, Knowledge Distillation | Code Available |
| Data-Free Knowledge Distillation for Heterogeneous Federated Learning | May 20, 2021 | Data-free Knowledge Distillation, Federated Learning | Code Available |
| Contrastive Model Inversion for Data-Free Knowledge Distillation | May 18, 2021 | Contrastive Learning, Data-free Knowledge Distillation | Code Available |
| Training Generative Adversarial Networks in One Stage | Feb 28, 2021 | Data-free Knowledge Distillation, Image Generation | Code Available |