
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also for applications such as continual learning, privacy, and neural architecture search.
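
To make the protocol concrete, below is a minimal, illustrative PyTorch sketch of one common family of methods, gradient matching (in the spirit of the dataset condensation line of work): the distilled images are learnable tensors optimized so that the gradients they induce in a randomly initialized network match those induced by real batches, followed by the standard evaluation step of training a fresh model only on the distilled set. Random tensors stand in for a real dataset, and all names, shapes, and hyperparameters here are assumptions for illustration, not any specific paper's settings.

```python
# Illustrative sketch only: random tensors stand in for a real dataset,
# and the tiny MLP, sizes, and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

real_x = torch.randn(1000, 1, 28, 28)          # stand-in "large real dataset"
real_y = torch.randint(0, 10, (1000,))

syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)  # learnable distilled images
syn_y = torch.arange(10)                        # one synthetic example per class

def make_model():
    # Fresh randomly initialized network for each outer iteration.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                         nn.ReLU(), nn.Linear(128, 10))

opt_syn = torch.optim.SGD([syn_x], lr=0.1)

for _ in range(100):                            # outer distillation loop
    model = make_model()
    params = list(model.parameters())

    # Gradients of the loss on a random real batch.
    idx = torch.randint(0, len(real_x), (64,))
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x[idx]), real_y[idx]), params)

    # Gradients on the synthetic set; keep the graph so the matching
    # loss can backpropagate into the synthetic pixels themselves.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)

    # Push the two sets of gradients toward each other.
    match = sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()

# Evaluation protocol: train a fresh model only on the distilled set,
# then test on held-out real data (random stand-ins here).
eval_model = make_model()
opt = torch.optim.SGD(eval_model.parameters(), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(eval_model(syn_x.detach()), syn_y).backward()
    opt.step()
test_x, test_y = torch.randn(200, 1, 28, 28), torch.randint(0, 10, (200,))
acc = (eval_model(test_x).argmax(1) == test_y).float().mean().item()
print(f"accuracy of a model trained only on the distilled set: {acc:.3f}")
```

On random stand-in data the printed accuracy is of course meaningless; with a real dataset, the final loop is exactly the "train on the distilled set, test on a separate real dataset" evaluation described above.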

Papers

Showing 171-180 of 216 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching | | 0 |
| Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation | Code | 1 |
| On the Size and Approximation Error of Distilled Sets | | 0 |
| A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness | | 0 |
| A Survey on Dataset Distillation: Approaches, Applications and Future Directions | | 0 |
| Generalizing Dataset Distillation via Deep Generative Prior | Code | 1 |
| DiM: Distilling Dataset into Generative Model | Code | 1 |
| Federated Virtual Learning on Heterogeneous Data with Local-global Distillation | | 0 |
| DREAM: Efficient Dataset Distillation by Representative Matching | Code | 1 |
| Evaluating the effect of data augmentation and BALD heuristics on distillation of Semantic-KITTI dataset | | 0 |

Leaderboard

No leaderboard results yet.