
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for dataset understanding but also in applications such as continual learning, privacy, and neural architecture search.
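
As a concrete illustration of this train-on-distilled, test-on-real protocol, here is a minimal, self-contained PyTorch sketch that uses gradient matching (in the spirit of Dataset Condensation) as a toy distiller. The Gaussian-blob data, network architecture, hyperparameters, and all names (make_blobs, param_grads, etc.) are illustrative assumptions, not taken from any paper listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_blobs(n=512, d=20):
    """Toy two-class 'real' dataset (a stand-in for e.g. CIFAR-10)."""
    x = torch.cat([torch.randn(n // 2, d) + 1.0, torch.randn(n // 2, d) - 1.0])
    y = torch.cat([torch.zeros(n // 2), torch.ones(n // 2)]).long()
    return x, y

x_train, y_train = make_blobs()
x_test, y_test = make_blobs()

d, per_class = 20, 5
x_syn = torch.randn(2 * per_class, d, requires_grad=True)  # learnable synthetic examples
y_syn = torch.tensor([0] * per_class + [1] * per_class)    # fixed synthetic labels

def make_net():
    return nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))

def param_grads(model, x, y, create_graph=False):
    """Gradients of the classification loss w.r.t. the model parameters."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, list(model.parameters()), create_graph=create_graph)

# Distillation: optimize x_syn so that gradients computed on the synthetic
# data mimic gradients computed on the real data, averaged over freshly
# initialized networks (a toy version of gradient matching).
opt = torch.optim.Adam([x_syn], lr=0.05)
for step in range(300):
    model = make_net()  # fresh random init each step
    g_real = [g.detach() for g in param_grads(model, x_train, y_train)]
    g_syn = param_grads(model, x_syn, y_syn, create_graph=True)
    match_loss = sum(F.mse_loss(gs, gr) for gs, gr in zip(g_syn, g_real))
    opt.zero_grad()
    match_loss.backward()
    opt.step()

# Evaluation protocol: train a fresh model ONLY on the 10 distilled points...
model = make_net()
opt_eval = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    loss = F.cross_entropy(model(x_syn.detach()), y_syn)
    opt_eval.zero_grad()
    loss.backward()
    opt_eval.step()

# ...and report its accuracy on held-out real data.
with torch.no_grad():
    acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
print(f"Test accuracy of a model trained on 10 distilled points: {acc:.2%}")
```

The point of the sketch is the evaluation protocol: the distilled set is used only for training, and performance is always measured on held-out real data, never on the synthetic points themselves.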

Papers

Showing 181-190 of 216 papers

Title | Status | Hype
Dataset Distillers Are Good Label Denoisers In the Wild | Code | 0
Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | Code | 0
Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge | Code | 0
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Enhancing Dataset Distillation via Non-Critical Region Refinement | Code | 0
ATOM: Attention Mixer for Efficient Dataset Distillation | Code | 0
Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement | Code | 0
Dataset Distillation with Infinitely Wide Convolutional Networks | Code | 0
AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories | Code | 0
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0

Leaderboard

No leaderboard results yet.