SOTAVerified

Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for dataset understanding, but also for downstream applications such as continual learning, privacy-preserving training, and neural architecture search.
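
As a rough illustration of the evaluation protocol described above, the sketch below trains a fresh model on a distilled set alone and then measures its accuracy on real held-out data. This is a minimal sketch assuming a PyTorch setup; the distilled tensors, the `model_fn` factory, and the hyperparameters are illustrative placeholders, not any particular paper's method.

```python
# Minimal sketch of the standard dataset distillation evaluation
# protocol (assumed PyTorch setup; names here are illustrative).
import torch
import torch.nn as nn

def evaluate_distilled(distilled_x, distilled_y, test_loader, model_fn,
                       epochs=300, lr=0.01):
    """Train a fresh model on the small distilled set, then report
    its accuracy on the real held-out test set."""
    model = model_fn()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        # The distilled set is tiny, so full-batch training is cheap.
        opt.zero_grad()
        loss = loss_fn(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:  # real held-out data, never distilled
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```

In practice, reported numbers are usually averaged over several randomly initialized models, since performance after training on such a small set varies noticeably with initialization.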

Papers

Showing 81–90 of 216 papers

Title | Status | Hype
Data-Efficient Generation for Dataset Distillation | – | 0
Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning | – | 0
UDD: Dataset Distillation via Mining Underutilized Regions | – | 0
Neural Spectral Decomposition for Dataset Distillation | Code | 0
Distilling Long-tailed Datasets | – | 0
Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation | – | 0
Dataset Distillation for Histopathology Image Classification | – | 0
Generative Dataset Distillation Based on Diffusion Model | Code | 1
Heavy Labels Out! Dataset Distillation with Label Space Lightening | – | 0
Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator | – | 0

Leaderboard

No leaderboard results yet.