
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has a range of downstream applications, e.g., continual learning, privacy-preserving learning, and neural architecture search.
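The evaluation protocol described above (train a fresh model on the distilled set, then test it on held-out real data) can be sketched in a few lines. The snippet below is a minimal illustration and is not taken from any of the listed papers: it assumes PyTorch, a toy ConvNet as the evaluation architecture, and hypothetical tensors distilled_x / distilled_y standing in for an actual distilled set.

```python
# Minimal sketch of the dataset-distillation evaluation protocol (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    """Tiny evaluation model; benchmarks in this area typically use a small ConvNet."""
    def __init__(self, channels=1, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def evaluate_distilled_set(distilled_x, distilled_y, test_x, test_y, epochs=300, lr=0.01):
    """Train a fresh model on the (small) distilled set, then test on real data."""
    model = SmallConvNet(channels=distilled_x.shape[1], num_classes=int(test_y.max()) + 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        # The distilled set is tiny, so full-batch gradient steps are fine here.
        opt.zero_grad()
        loss = F.cross_entropy(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        acc = (model(test_x).argmax(1) == test_y).float().mean().item()
    return acc

if __name__ == "__main__":
    # Toy usage with random stand-in data; replace with a real distilled set and test split.
    distilled_x = torch.randn(10, 1, 28, 28)   # e.g. one synthetic image per class
    distilled_y = torch.arange(10)
    test_x, test_y = torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))
    print(f"test accuracy: {evaluate_distilled_set(distilled_x, distilled_y, test_x, test_y):.3f}")
```

How the distilled images themselves are synthesized is exactly what the papers listed below differ on (gradient matching, trajectory matching, distribution matching, diffusion-based generation, and so on); the sketch only covers the common evaluation step.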

Papers

Showing 126-150 of 216 papers

Title | Status | Hype
Distilling Datasets Into Less Than One Image | Code | 1
Towards Adversarially Robust Dataset Distillation by Curvature Regularization | Code | 0
One Category One Prompt: Dataset Distillation using Diffusion Models | | 0
Latent Dataset Distillation with Diffusion Models | | 0
Distributional Dataset Distillation with Subtask Decomposition | Code | 0
Improve Cross-Architecture Generalization on Dataset Distillation | Code | 1
Group Distributionally Robust Dataset Distillation with Risk Minimization | Code | 1
Importance-Aware Adaptive Dataset Distillation | Code | 0
D^4: Dataset Distillation via Disentangled Diffusion Model | Code | 1
MIM4DD: Mutual Information Maximization for Dataset Distillation | Code | 0
Dataset Distillation via Adversarial Prediction Matching | Code | 0
Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study | Code | 0
On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm | Code | 1
Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents | Code | 1
Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement | Code | 1
Dataset Distillation via Curriculum Data Synthesis in Large Data Era | Code | 1
Dataset Distillation via the Wasserstein Metric | | 0
Discovering Galaxy Features via Dataset Distillation | Code | 0
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective | | 0
QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation | | 0
Efficient Dataset Distillation via Minimax Diffusion | Code | 1
Dataset Distillation in Latent Space | | 0
Frequency Domain-based Dataset Distillation | Code | 1
Embarrassingly Simple Dataset Distillation | Code | 1
Sequential Subset Matching for Dataset Distillation | Code | 0

No leaderboard results yet.