
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled dataset and testing them on a separate real dataset (the validation/test set). A good small distilled dataset is useful not only for understanding the original data, but also for applications such as continual learning, privacy, and neural architecture search.
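
The evaluation protocol in the definition above can be made concrete with a short sketch. The following is a minimal illustration in PyTorch, not the protocol of any particular paper; all names (`evaluate_distilled`, `make_model`, `epochs`, `lr`) are placeholders chosen for this example.

```python
# Minimal sketch of the dataset distillation evaluation protocol:
# train a fresh model on the small distilled set, then measure its
# accuracy on the held-out real test set. Illustrative only.
import torch
import torch.nn.functional as F


def evaluate_distilled(distilled_x, distilled_y, test_loader,
                       make_model, epochs=100, lr=0.01, device="cpu"):
    model = make_model().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    distilled_x = distilled_x.to(device)
    distilled_y = distilled_y.to(device)

    # Distilled sets are tiny, so full-batch training is typical.
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()

    # Evaluate on real, never-distilled test data.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total
```

Reported results in this literature are typically the mean test accuracy over several models trained from different random initializations, since a single training run on so few examples is noisy.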

Papers

Showing 111–120 of 216 papers

Title | Status | Hype
GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost | Code | 1
FedCache 2.0: Federated Edge Learning with Knowledge Caching and Dataset Distillation | Code | 2
Curriculum Dataset Distillation | | 0
ATOM: Attention Mixer for Efficient Dataset Distillation | Code | 0
Practical Dataset Distillation Based on Deep Support Vectors | | 0
Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | | 0
Generative Dataset Distillation: Balancing Global Structure and Local Details | | 0
Self-supervised Dataset Distillation: A Good Compression Is All You Need | Code | 2
Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation | Code | 1
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | Code | 1

No leaderboard results yet.