
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has applications in continual learning, privacy, and neural architecture search.
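
As a concrete illustration of this setup, below is a minimal sketch of one common distillation approach, gradient matching, in PyTorch: learnable synthetic examples are updated so that the gradient they induce on a network mimics the gradient from real data. Everything in the sketch (the toy model, the random stand-in data, shapes, and hyperparameters) is an illustrative assumption, not a reference implementation of any paper listed on this page.

```python
# Minimal gradient-matching sketch for dataset distillation.
# All data, model sizes, and hyperparameters are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the large real training set: random "images" and labels.
real_x = torch.randn(512, 1, 28, 28)
real_y = torch.randint(0, 10, (512,))

# The small distilled dataset: 10 learnable synthetic examples, 1 per class.
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)

# A fixed toy network; full methods re-sample/re-train models across steps.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
opt = torch.optim.Adam([syn_x], lr=0.1)

for step in range(100):
    # Gradient of the loss on a real batch w.r.t. the model parameters.
    idx = torch.randint(0, 512, (64,))
    real_loss = F.cross_entropy(model(real_x[idx]), real_y[idx])
    g_real = torch.autograd.grad(real_loss, model.parameters())

    # Gradient on the synthetic set, kept in the graph (create_graph=True)
    # so the matching loss can be differentiated w.r.t. syn_x.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    g_syn = torch.autograd.grad(syn_loss, model.parameters(),
                                create_graph=True)

    # Update the synthetic images so their gradient matches the real one.
    match = sum(F.mse_loss(a, b.detach()) for a, b in zip(g_syn, g_real))
    opt.zero_grad()
    match.backward()
    opt.step()
```

Evaluation then follows the protocol above: train a fresh model from scratch on (syn_x, syn_y) and measure its accuracy on held-out real test data. Full methods typically also re-initialize the network across outer iterations so the synthetic set does not overfit a single set of weights.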

Papers

Showing 181–190 of 216 papers (page 19 of 22)

| Title | Status | Hype |
| --- | --- | --- |
| Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles | | 0 |
| Dataset Distillation with Convexified Implicit Gradients | Code | 1 |
| Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation | | 0 |
| Dataset Distillation: A Comprehensive Review | | 0 |
| A Comprehensive Survey of Dataset Distillation | | 0 |
| Backdoor Attacks Against Dataset Distillation | Code | 1 |
| Few-Shot Dataset Distillation via Translative Pre-Training | | 0 |
| Slimmable Dataset Condensation | | 0 |
| On Implicit Bias in Overparameterized Bilevel Optimization | | 0 |
| Accelerating Dataset Distillation via Model Augmentation | Code | 0 |

Leaderboard

No leaderboard results yet.