SOTAVerified

Data-free Knowledge Distillation

Papers

Showing 21–30 of 75 papers

| Title | Status | Hype |
|---|---|---|
| Small Scale Data-Free Knowledge Distillation | Code | 1 |
| Dual Discriminator Adversarial Distillation for Data-free Model Compression | | 0 |
| FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning | | 0 |
| Distribution Shift Matters for Knowledge Distillation with Webly Collected Images | | 0 |
| Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification | | 0 |
| Conditional Generative Data-free Knowledge Distillation | | 0 |
| DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning | | 0 |
| Data-Free Distillation of Language Model by Text-to-Text Transfer | | 0 |
| CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation | | 0 |
| DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier | | 0 |
Page 3 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD (T5-base) | Accuracy | 91.7 | | Unverified |
| 2 | ZeroGen (T5-base) | Accuracy | 88.5 | | Unverified |
| 3 | ProGen (T5-base) | Accuracy | 85.9 | | Unverified |
| 4 | Prompt2Model (T5-base) | Accuracy | 62.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD (T5-base) | Exact Match | 75.2 | | Unverified |
| 2 | Prompt2Model (T5-base) | Exact Match | 74.4 | | Unverified |
| 3 | ZeroGen (T5-base) | Exact Match | 69.4 | | Unverified |
| 4 | ProGen (T5-base) | Exact Match | 68.1 | | Unverified |
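The tables above report two metrics, Accuracy and Exact Match. As a point of reference, here is a minimal sketch of how these metrics are conventionally computed; the function names and the light normalization step are illustrative assumptions, not taken from this page or any specific paper listed here.

```python
def accuracy(predictions, labels):
    """Percentage of predictions equal to their label (illustrative)."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return 100.0 * correct / len(labels)

def exact_match(predictions, references):
    """Percentage of predictions matching the reference string exactly,
    after light normalization (lowercasing, collapsing whitespace).
    Normalization details vary by benchmark; this is one common choice."""
    def norm(s):
        return " ".join(s.lower().split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)
```

Both return percentages on a 0–100 scale, matching the "Claimed" column above.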