
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it perform well on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and then testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also for applications such as continual learning, privacy, and neural architecture search.
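
As a concrete illustration of the evaluation protocol above, here is a minimal PyTorch sketch: it trains a fresh model from scratch on the distilled set and reports accuracy on the held-out real test set. All names (evaluate_distilled, model_fn, x_syn, y_syn) are illustrative placeholders, not from any specific paper or library.

```python
# A minimal sketch of the dataset-distillation evaluation protocol,
# assuming PyTorch. Names are illustrative assumptions, not a standard API.
import torch
import torch.nn.functional as F


def evaluate_distilled(x_syn, y_syn, test_loader, model_fn,
                       epochs=300, lr=0.01, device="cpu"):
    """Train a fresh model on the small distilled set, then report
    its accuracy on a held-out real test set."""
    model = model_fn().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    x_syn, y_syn = x_syn.to(device), y_syn.to(device)

    model.train()
    for _ in range(epochs):          # full-batch updates: the distilled set is tiny
        opt.zero_grad()
        F.cross_entropy(model(x_syn), y_syn).backward()
        opt.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:     # real data the distillation never saw
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.size(0)
    return correct / total
```

A distillation method is then judged by how high this test accuracy gets for a fixed synthetic budget, commonly stated as images per class (e.g., 1, 10, or 50).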

Papers

Showing 21–30 of 216 papers

Title | Status | Hype
D^4M: Dataset Distillation via Disentangled Diffusion Model | Code | 1
Dataset Quantization with Active Learning based Adaptive Sampling | Code | 1
A Label is Worth a Thousand Images in Dataset Distillation | Code | 1
Low-Rank Similarity Mining for Multimodal Dataset Distillation | Code | 1
What is Dataset Distillation Learning? | Code | 1
GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost | Code | 1
Efficiency for Free: Ideal Data Are Transportable Representations | Code | 1
Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation | Code | 1
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | Code | 1
Distilling Datasets Into Less Than One Image | Code | 1

Page 3 of 22

Leaderboard

No leaderboard results yet.