
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled dataset and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has various applications (e.g., continual learning, privacy, and neural architecture search).
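The input/output contract above can be illustrated with a minimal, self-contained sketch. This is not any particular paper's method: it assumes a linear least-squares model, for which the distilled labels can be chosen in closed form so that training on a handful of synthetic points reproduces the model trained on the full data. Real distillation methods instead learn the synthetic inputs and labels by gradient-based optimization (e.g., gradient or trajectory matching); the synthetic basis `X_syn` chosen here is a simplifying assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Large" real training set: 1000 points, 5 features, linear target + noise.
X_train = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y_train = X_train @ w_true + 0.1 * rng.normal(size=1000)

# Model trained on the full real data (ordinary least squares).
w_full, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Distill: 5 synthetic points whose least-squares solution reproduces
# w_full. The labels are set analytically here (an illustrative shortcut);
# gradient-based distillation methods learn inputs and labels instead.
X_syn = np.eye(5)        # assumed synthetic inputs: a full-rank basis
y_syn = X_syn @ w_full   # labels that encode the full-data model

# "Train" a fresh model on the distilled dataset only.
w_distilled, *_ = np.linalg.lstsq(X_syn, y_syn, rcond=None)

# Evaluate both models on a held-out real test set.
X_test = rng.normal(size=(200, 5))
y_test = X_test @ w_true
mse_full = np.mean((X_test @ w_full - y_test) ** 2)
mse_distilled = np.mean((X_test @ w_distilled - y_test) ** 2)

# The 5-point distilled set yields the same model as the 1000-point set.
print(np.allclose(w_full, w_distilled))
```

The evaluation step mirrors the protocol described above: the distilled set is judged solely by the test performance of a model trained on it, never by resemblance to the original data.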

Papers

Showing 51–60 of 216 papers

Title | Status | Hype
A Label is Worth a Thousand Images in Dataset Distillation | Code | 1
GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost | Code | 1
Dataset Quantization with Active Learning based Adaptive Sampling | Code | 1
Low-Rank Similarity Mining for Multimodal Dataset Distillation | Code | 1
Dataset Distillation via Committee Voting | Code | 1
Dataset Distillation via Factorization | Code | 1
D^4M: Dataset Distillation via Disentangled Diffusion Model | Code | 1
A Large-Scale Study on Video Action Dataset Condensation | Code | 1
Dataset Distillation via Vision-Language Category Prototype | Code | 1
Self-Supervised Dataset Distillation for Transfer Learning | Code | 1
Page 6 of 22

No leaderboard results yet.