SOTAVerified

Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has applications in continual learning, privacy, neural architecture search, and more.
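The train-on-distilled, test-on-real evaluation protocol described above can be sketched in a few lines. This is a minimal illustration using a naive class-mean baseline (one synthetic example per class) on toy Gaussian data, not the method of any specific paper listed below; all function and variable names are hypothetical.

```python
import numpy as np

def distill_by_class_means(X_train, y_train):
    """Naive distillation baseline (hypothetical): represent each class
    by the mean of its training examples (one synthetic example per class)."""
    classes = np.unique(y_train)
    X_syn = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    return X_syn, classes

def evaluate(X_syn, y_syn, X_test, y_test):
    """Standard protocol: train a model on the distilled set (here a
    nearest-centroid classifier) and test it on held-out real data."""
    dists = ((X_test[:, None, :] - X_syn[None, :, :]) ** 2).sum(axis=-1)
    preds = y_syn[dists.argmin(axis=1)]
    return (preds == y_test).mean()

# Toy two-class Gaussian data standing in for the "large real dataset".
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, size=(500, 8))
X1 = rng.normal(loc=+2.0, size=(500, 8))
X_train = np.concatenate([X0[:400], X1[:400]])
y_train = np.concatenate([np.zeros(400, int), np.ones(400, int)])
X_test = np.concatenate([X0[400:], X1[400:]])
y_test = np.concatenate([np.zeros(100, int), np.ones(100, int)])

X_syn, y_syn = distill_by_class_means(X_train, y_train)
acc = evaluate(X_syn, y_syn, X_test, y_test)
print(f"distilled set size: {len(X_syn)}, test accuracy: {acc:.2f}")
```

The papers below replace the class-mean step with learned synthesis (gradient matching, trajectory matching, diffusion models, etc.) and the nearest-centroid model with neural networks, but the evaluation loop keeps this shape.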

Papers

Showing 51–75 of 216 papers

Title | Status | Hype
A Label is Worth a Thousand Images in Dataset Distillation | Code | 1
Efficient Dataset Distillation via Minimax Diffusion | Code | 1
Towards Trustworthy Dataset Distillation | Code | 1
Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents | Code | 1
Frequency Domain-based Dataset Distillation | Code | 1
Improve Cross-Architecture Generalization on Dataset Distillation | Code | 1
D^4M: Dataset Distillation via Disentangled Diffusion Model | Code | 1
A Large-Scale Study on Video Action Dataset Condensation | Code | 1
Dataset Distillation via Vision-Language Category Prototype | Code | 1
Dataset Distillation with Convexified Implicit Gradients | Code | 1
Dataset Distillation | Code | 1
Self-Supervised Dataset Distillation for Transfer Learning | Code | 1
DataDAM: Efficient Dataset Distillation with Attention Matching | Code | 1
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | Code | 1
Dataset Factorization for Condensation | Code | 1
DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation | Code | 1
Dataset Distillation with Infinitely Wide Convolutional Networks | Code | 0
Behaviour Distillation | Code | 0
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0
Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement | Code | 0
Dataset Distillation via Adversarial Prediction Matching | Code | 0
Dataset Distillation Using Parameter Pruning | Code | 0
Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation | Code | 0
Dataset Distillation using Neural Feature Regression | Code | 0
Page 3 of 9

No leaderboard results yet.