SOTAVerified

Data-free Knowledge Distillation

Papers

Showing 1–25 of 75 papers

Title | Status | Hype
Mosaic: Data-Free Knowledge Distillation via Mixture-of-Experts for Heterogeneous Distributed Environments | Code | 0
CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation | — | 0
Adversarial Curriculum Graph-Free Knowledge Distillation for Graph Neural Networks | — | 0
HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast | — | 0
Does Training with Synthetic Data Truly Protect Privacy? | Code | 0
Hybrid Data-Free Knowledge Distillation | Code | 0
Relation-Guided Adversarial Learning for Data-free Knowledge Transfer | Code | 1
Knowledge Migration Framework for Smart Contract Vulnerability Detection | — | 0
One-shot Federated Learning via Synthetic Distiller-Distillate Communication | Code | 1
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation | Code | 0
Towards Lifelong Few-Shot Customization of Text-to-Image Diffusion | — | 0
Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation | Code | 0
Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks | — | 0
EchoDFKD: Data-Free Knowledge Distillation for Cardiac Ultrasound Segmentation using Synthetic Data | Code | 1
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture | Code | 1
Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning | Code | 1
Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification | — | 0
Small Scale Data-Free Knowledge Distillation | Code | 1
Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data | — | 0
FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning | — | 0
Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0
GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | — | 0
De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts | — | 0
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation | Code | 0
Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GOLD (T5-base) | Accuracy | 91.7 | — | Unverified
2 | ZeroGen (T5-base) | Accuracy | 88.5 | — | Unverified
3 | ProGen (T5-base) | Accuracy | 85.9 | — | Unverified
4 | Prompt2Model (T5-base) | Accuracy | 62.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GOLD (T5-base) | Exact Match | 75.2 | — | Unverified
2 | Prompt2Model (T5-base) | Exact Match | 74.4 | — | Unverified
3 | ZeroGen (T5-base) | Exact Match | 69.4 | — | Unverified
4 | ProGen (T5-base) | Exact Match | 68.1 | — | Unverified