Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good small distilled dataset is useful not only for dataset understanding but also for various applications, e.g., continual learning, privacy, and neural architecture search.
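To make the train-on-synthetic, test-on-real protocol concrete, here is a minimal sketch in Python. The function names (distill, evaluate), the scikit-learn digits dataset, and the use of per-class mean images as the "distilled" set are all illustrative assumptions for this example; they are a trivial stand-in, not any specific published distillation method, which would instead learn the synthetic examples (e.g., by gradient or trajectory matching) under this same evaluation protocol.

```python
# Minimal sketch of the dataset distillation protocol (illustrative only).
# The class-mean "distiller" is a trivial stand-in: a real algorithm would
# optimize the synthetic examples rather than compute them in closed form.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def distill(X, y):
    """Return a tiny synthetic set: one mean image per class (hypothetical baseline)."""
    classes = np.unique(y)
    X_syn = np.stack([X[y == c].mean(axis=0) for c in classes])
    return X_syn, classes

def evaluate(X_syn, y_syn, X_test, y_test):
    """Train a fresh model on the distilled set, then test it on held-out real data."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_syn, y_syn)
    return model.score(X_test, y_test)

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_syn, y_syn = distill(X_train, y_train)      # 10 synthetic images, one per class
acc = evaluate(X_syn, y_syn, X_test, y_test)  # real-data accuracy of the model
print(f"{len(X_syn)} distilled examples -> test accuracy {acc:.2f}")
```

Even this crude baseline performs far above chance on digits, which illustrates why the evaluation always compares against simple coreset or averaging baselines as well as against training on the full real dataset.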

Papers

Showing 131–140 of 216 papers

Title | Status | Hype
Generative Dataset Distillation using Min-Max Diffusion Model |  | 0
Dataset Distillation for Histopathology Image Classification |  | 0
Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory |  | 0
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation |  | 0
Heavy Labels Out! Dataset Distillation with Label Space Lightening |  | 0
Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation |  | 0
Hierarchical Features Matter: A Deep Exploration of Progressive Parameterization Method for Dataset Distillation |  | 0
Hyperbolic Dataset Distillation |  | 0
Image Dataset Compression Based on Matrix Product States |  | 0
A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption |  | 0

Leaderboard

No leaderboard results yet.