Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for dataset understanding, but also has applications in continual learning, privacy-preserving training, and neural architecture search, among others.
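
To make the task statement concrete, the snippet below is a minimal, self-contained sketch of the classic bilevel formulation (in the spirit of Wang et al., 2018, "Dataset Distillation"): the synthetic images are treated as learnable parameters, a fresh model is trained on them with a few unrolled gradient steps, and the synthetic data are updated so that the resulting model fits real data. The architecture, hyperparameters, and helper names (make_model, distill_step, inner_lr) are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch of a bilevel dataset-distillation loop; assumes PyTorch >= 2.0.
# All names and hyperparameters here are illustrative, not from any listed paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

def make_model():
    # Tiny classifier for 28x28 grayscale inputs (MNIST-like data assumed).
    return nn.Sequential(
        nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
    )

# The distilled dataset: one learnable image per class, with fixed labels.
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)

outer_opt = torch.optim.Adam([syn_x], lr=0.1)
inner_lr = 0.01

def distill_step(real_x, real_y, inner_steps=5):
    """One outer update: make the synthetic data better at training models."""
    model = make_model()
    params = {k: v.detach().clone().requires_grad_(True)
              for k, v in model.named_parameters()}

    # Inner loop: unrolled SGD on the synthetic data, keeping the graph
    # (create_graph=True) so gradients flow back into syn_x.
    for _ in range(inner_steps):
        logits = functional_call(model, params, (syn_x,))
        inner_loss = F.cross_entropy(logits, syn_y)
        grads = torch.autograd.grad(inner_loss, list(params.values()),
                                    create_graph=True)
        params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}

    # Outer loss: how well the synthetically-trained model fits real data.
    outer_loss = F.cross_entropy(functional_call(model, params, (real_x,)), real_y)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return outer_loss.item()

# Usage: repeatedly feed real batches, e.g.
# loss = distill_step(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))
```

Evaluation then follows the protocol described above: train a fresh model to convergence on the learned (syn_x, syn_y) and report its accuracy on the real held-out test set.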

Papers

Showing 91–100 of 216 papers

Title | Status | Hype
Discovering Galaxy Features via Dataset Distillation | Code | 0
Dataset Distillation by Automatic Training Trajectories | Code | 0
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation | Code | 0
Exploring the Impact of Dataset Bias on Dataset Distillation | Code | 0
Data-to-Model Distillation: Data-Efficient Learning Framework | Code | 0
Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study | Code | 0
Enhancing Dataset Distillation via Non-Critical Region Refinement | Code | 0
Exploring Generalized Gait Recognition: Reducing Redundancy and Noise within Indoor and Outdoor Datasets | Code | 0
Accelerating Dataset Distillation via Model Augmentation | Code | 0
Dataset Distillers Are Good Label Denoisers In the Wild | Code | 0

No leaderboard results yet.