SOTAVerified

Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset. This output is evaluated by training models on the distilled dataset and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also in applications such as continual learning, privacy, and neural architecture search.
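The evaluation protocol above (distill the training set, train a fresh model on the tiny synthetic set, test on held-out real data) can be sketched on toy data. The "distillation" here is a deliberately trivial baseline, one class-mean prototype per class, not any published distillation algorithm; it only illustrates the train-on-synthetic, test-on-real loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: two well-separated 2-D Gaussian classes.
n = 500
X = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(n, 2)),
    rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(n, 2)),
])
y = np.array([0] * n + [1] * n)

# Split into a training set (to be distilled) and a real test set.
idx = rng.permutation(2 * n)
train, test = idx[:800], idx[800:]

# "Distill" the 800-example training set down to one synthetic
# example per class. Class means are a trivial stand-in for a real
# distillation method, used here only to keep the sketch short.
distilled_X = np.stack(
    [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
)
distilled_y = np.array([0, 1])

# Train a model on the 2-example distilled set; with one prototype
# per class this reduces to a nearest-centroid classifier.
def predict(points):
    dists = np.linalg.norm(
        points[:, None, :] - distilled_X[None, :, :], axis=2
    )
    return distilled_y[dists.argmin(axis=1)]

# Evaluate on the held-out real test set.
acc = (predict(X[test]) == y[test]).mean()
print(f"test accuracy from 2 distilled examples: {acc:.3f}")
```

Real dataset distillation methods replace the class-mean step with an optimization (e.g., matching gradients or training trajectories of networks), but the input/output contract and the evaluation loop are the same as in this sketch.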

Papers

Showing 191–200 of 216 papers

Title | Status | Hype
A Survey on Dataset Distillation: Approaches, Applications and Future Directions | | 0
Federated Virtual Learning on Heterogeneous Data with Local-global Distillation | | 0
Evaluating the effect of data augmentation and BALD heuristics on distillation of Semantic-KITTI dataset | | 0
Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles | | 0
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation | | 0
Dataset Distillation: A Comprehensive Review | | 0
A Comprehensive Survey of Dataset Distillation | | 0
Slimmable Dataset Condensation | | 0
Few-Shot Dataset Distillation via Translative Pre-Training | | 0
On Implicit Bias in Overparameterized Bilevel Optimization | | 0
Page 20 of 22

No leaderboard results yet.