SOTAVerified

Data-free Knowledge Distillation

Papers

Showing 21–30 of 75 papers

| Title | Status | Hype |
|---|---|---|
| ZeroGen: Efficient Zero-shot Learning via Dataset Generation | Code | 1 |
| Does Training with Synthetic Data Truly Protect Privacy? | Code | 0 |
| Mosaic: Data-Free Knowledge Distillation via Mixture-of-Experts for Heterogeneous Distributed Environments | Code | 0 |
| Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN | Code | 0 |
| Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation | Code | 0 |
| Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy | Code | 0 |
| AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation | Code | 0 |
| Hybrid Data-Free Knowledge Distillation | Code | 0 |
| DAD++: Improved Data-free Test Time Adversarial Defense | Code | 0 |
| Handling Data Heterogeneity in Federated Learning via Knowledge Distillation and Fusion | Code | 0 |
Page 3 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD (T5-base) | Accuracy | 91.7 | — | Unverified |
| 2 | ZeroGen (T5-base) | Accuracy | 88.5 | — | Unverified |
| 3 | ProGen (T5-base) | Accuracy | 85.9 | — | Unverified |
| 4 | Prompt2Model (T5-base) | Accuracy | 62.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD (T5-base) | Exact Match | 75.2 | — | Unverified |
| 2 | Prompt2Model (T5-base) | Exact Match | 74.4 | — | Unverified |
| 3 | ZeroGen (T5-base) | Exact Match | 69.4 | — | Unverified |
| 4 | ProGen (T5-base) | Exact Match | 68.1 | — | Unverified |