SOTAVerified

Data-free Knowledge Distillation

Papers

Showing 21–30 of 75 papers

| Title | Status | Hype |
| --- | --- | --- |
| Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0 |
| GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | | 0 |
| De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts | | 0 |
| AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation | Code | 0 |
| Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation | Code | 0 |
| FedD2S: Personalized Data-Free Federated Knowledge Distillation | | 0 |
| Data-Free Distillation of Language Model by Text-to-Text Transfer | | 0 |
| Cross-feature Contrastive Loss for Decentralized Deep Learning on Heterogeneous Data | Code | 0 |
| Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images | | 0 |
| NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation | Code | 1 |
Page 3 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GOLD (T5-base) | Accuracy | 91.7 | | Unverified |
| 2 | ZeroGen (T5-base) | Accuracy | 88.5 | | Unverified |
| 3 | ProGen (T5-base) | Accuracy | 85.9 | | Unverified |
| 4 | Prompt2Model (T5-base) | Accuracy | 62.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GOLD (T5-base) | Exact Match | 75.2 | | Unverified |
| 2 | Prompt2Model (T5-base) | Exact Match | 74.4 | | Unverified |
| 3 | ZeroGen (T5-base) | Exact Match | 69.4 | | Unverified |
| 4 | ProGen (T5-base) | Exact Match | 68.1 | | Unverified |