
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data but also for applications such as continual learning, privacy-preserving learning, and neural architecture search.
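To make the setup concrete, below is a minimal sketch of one common family of methods, gradient matching (in the spirit of dataset condensation): synthetic images are learned so that the gradient they induce in a randomly initialized model matches the gradient induced by real data. Everything in the sketch is an illustrative assumption (the two-layer model, the MNIST-like shapes, and the random tensors standing in for the real dataset); it is not the method of any particular paper listed below.

```python
# Minimal gradient-matching sketch of dataset distillation.
# All sizes, the model, and the random "real" data are illustrative
# assumptions, not any specific paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_classes, img_shape = 10, (1, 28, 28)      # MNIST-like toy setting
real_x = torch.randn(512, *img_shape)       # stand-in for the real training set
real_y = torch.randint(0, n_classes, (512,))

# The distilled dataset: one learnable image per class.
syn_x = torch.randn(n_classes, *img_shape, requires_grad=True)
syn_y = torch.arange(n_classes)

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                         nn.ReLU(), nn.Linear(128, n_classes))

opt_syn = torch.optim.SGD([syn_x], lr=0.1)

for it in range(100):
    model = make_model()                    # fresh random init each iteration
    params = list(model.parameters())

    # Gradient of the loss on a real mini-batch, treated as a fixed target.
    idx = torch.randint(0, len(real_x), (64,))
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x[idx]), real_y[idx]), params)
    g_real = [g.detach() for g in g_real]

    # Gradient of the loss on the synthetic set; create_graph=True lets us
    # backpropagate through this gradient into syn_x itself.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)

    # Match the two gradients and update the synthetic images.
    match = sum(F.mse_loss(gs, gr) for gs, gr in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()

# Evaluation (omitted): train a fresh model only on the 10 distilled
# images, then measure its accuracy on held-out real test data.
```

Note that the outer loss differentiates through a gradient computation, so the synthetic images themselves, not any model, are the optimization variables; this second-order structure is what distinguishes dataset distillation from ordinary training.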

Papers

Showing 71–80 of 216 papers

| Title | Status | Hype |
| --- | --- | --- |
| Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0 |
| Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement | Code | 0 |
| Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | Code | 0 |
| MetaDD: Boosting Dataset Distillation with Neural Network Architecture-Invariant Generalization | | 0 |
| Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0 |
| Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment | Code | 0 |
| Dataset Distillation-based Hybrid Federated Learning on Non-IID Data | | 0 |
| Label-Augmented Dataset Distillation | | 0 |
| Efficient Low-Resolution Face Recognition via Bridge Distillation | | 0 |
| A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption | | 0 |

No leaderboard results yet.