Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also for applications such as continual learning, privacy, and neural architecture search.
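To make the bilevel structure of the task concrete, here is a minimal sketch in PyTorch: synthetic points are optimized so that a student trained on them performs well on real data. It assumes a toy two-blob dataset, a linear student, and a single inner training step; all names, sizes, and hyperparameters are illustrative assumptions, not the released code of any listed paper.

```python
# Minimal dataset distillation sketch (assumptions throughout):
# learn a tiny synthetic set whose gradient step makes a fresh
# student classifier accurate on the real data.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for the "large real dataset": two Gaussian blobs in 2-D.
n_real = 512
x_real = torch.cat([torch.randn(n_real // 2, 2) + 2.0,
                    torch.randn(n_real // 2, 2) - 2.0])
y_real = torch.cat([torch.zeros(n_real // 2, dtype=torch.long),
                    torch.ones(n_real // 2, dtype=torch.long)])

# Learnable distilled dataset: one synthetic point per class.
x_syn = torch.randn(2, 2, requires_grad=True)
y_syn = torch.tensor([0, 1])
opt_syn = torch.optim.Adam([x_syn], lr=0.05)
inner_lr = 0.1  # learning rate of the inner (student) update

def model_loss(w, b, x, y):
    # Linear classifier: logits = x @ w + b.
    return F.cross_entropy(x @ w + b, y)

for step in range(300):
    # Fresh student each outer step.
    w = torch.zeros(2, 2, requires_grad=True)
    b = torch.zeros(2, requires_grad=True)

    # Inner step: one gradient step on the distilled data only,
    # keeping the graph so gradients flow back to x_syn.
    inner = model_loss(w, b, x_syn, y_syn)
    gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)
    w2, b2 = w - inner_lr * gw, b - inner_lr * gb

    # Outer objective: the updated student's loss on the real data.
    outer = model_loss(w2, b2, x_real, y_real)
    opt_syn.zero_grad()
    outer.backward()
    opt_syn.step()

# Evaluation protocol: train a new student on the distilled points
# alone, then test it on the real data.
w = torch.zeros(2, 2, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
for _ in range(50):
    loss = model_loss(w, b, x_syn.detach(), y_syn)
    gw, gb = torch.autograd.grad(loss, (w, b))
    with torch.no_grad():
        w -= inner_lr * gw
        b -= inner_lr * gb
acc = ((x_real @ w + b).argmax(1) == y_real).float().mean()
print(f"accuracy on real data after training on 2 distilled points: {acc.item():.2f}")
```

A single unrolled inner step keeps the meta-gradient cheap; practical methods scale this idea up by unrolling many inner steps or by matching gradients or whole training trajectories, as in "Dataset Distillation by Matching Training Trajectories" listed below.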

Papers

Showing 1–10 of 216 papers

| Title | Status | Hype |
|---|---|---|
| Dataset Distillation with Neural Characteristic Function: A Minmax Perspective | Code | 3 |
| DD-Ranking: Rethinking the Evaluation of Dataset Distillation | Code | 2 |
| FedCache 2.0: Federated Edge Learning with Knowledge Caching and Dataset Distillation | Code | 2 |
| Self-supervised Dataset Distillation: A Good Compression Is All You Need | Code | 2 |
| Dataset Quantization | Code | 2 |
| Dataset Distillation by Matching Training Trajectories | Code | 2 |
| FADRM: Fast and Accurate Data Residual Matching for Dataset Distillation | Code | 1 |
| Dataset Distillation via Vision-Language Category Prototype | Code | 1 |
| CaO_2: Rectifying Inconsistencies in Diffusion-Based Dataset Distillation | Code | 1 |
| Flowing Datasets with Wasserstein over Wasserstein Gradient Flows | Code | 1 |

Leaderboard

No leaderboard results yet.