Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also for downstream applications such as continual learning, privacy-preserving training, and neural architecture search.
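As a concrete illustration of this train-on-distilled, test-on-real protocol, here is a minimal sketch in PyTorch using the classic bi-level formulation: optimize the synthetic examples so that a model trained on them performs well on real data. The one-layer model, the random stand-in data, and all hyperparameters are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal dataset-distillation sketch (bi-level formulation).
# All data, model, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, DIM, IPC = 10, 32, 1  # IPC = distilled images per class

# Stand-in for the large real training set (replace with e.g. CIFAR-10 features).
real_x = torch.randn(512, DIM)
real_y = torch.randint(0, NUM_CLASSES, (512,))

# The distilled dataset: a small set of learnable synthetic examples with fixed labels.
syn_x = torch.randn(NUM_CLASSES * IPC, DIM, requires_grad=True)
syn_y = torch.arange(NUM_CLASSES).repeat_interleave(IPC)

def model_forward(params, x):
    """One-layer linear classifier, written functionally so we can
    differentiate through its own training update."""
    w, b = params
    return x @ w + b

def init_params():
    w = torch.zeros(DIM, NUM_CLASSES, requires_grad=True)
    b = torch.zeros(NUM_CLASSES, requires_grad=True)
    return [w, b]

inner_lr = 0.1
outer_opt = torch.optim.Adam([syn_x], lr=0.01)

for step in range(200):
    params = init_params()
    # Inner step: one SGD update of a fresh model on the *synthetic* data,
    # keeping the graph so gradients flow back into syn_x.
    inner_loss = F.cross_entropy(model_forward(params, syn_x), syn_y)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    params = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer step: the updated model should classify a real batch well.
    idx = torch.randint(0, real_x.size(0), (64,))
    outer_loss = F.cross_entropy(model_forward(params, real_x[idx]), real_y[idx])
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()

# Evaluation protocol: train a fresh model from scratch on the distilled
# data alone, then test it on held-out real data.
params = init_params()
opt = torch.optim.SGD(params, lr=0.1)
for _ in range(100):
    loss = F.cross_entropy(model_forward(params, syn_x.detach()), syn_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
acc = (model_forward(params, real_x).argmax(1) == real_y).float().mean()
print(f"accuracy of model trained on {len(syn_x)} distilled examples: {acc:.2f}")
```

In practice the inner loop is unrolled over multiple steps and many random network initializations, and many of the methods listed below replace this bi-level optimization with surrogates such as gradient matching, distribution matching, training-trajectory matching, or generative priors.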

Papers

Showing 61–70 of 216 papers

| Title | Status | Hype |
| --- | --- | --- |
| Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents | Code | 1 |
| Generalizing Dataset Distillation via Deep Generative Prior | Code | 1 |
| GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost | Code | 1 |
| Efficient Dataset Distillation via Minimax Diffusion | Code | 1 |
| Dataset Factorization for Condensation | Code | 1 |
| On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm | Code | 1 |
| Dataset Distillation with Infinitely Wide Convolutional Networks | Code | 0 |
| Behaviour Distillation | Code | 0 |
| BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0 |
| Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0 |
Page 7 of 22

No leaderboard results yet.