Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has applications in continual learning, privacy-preserving learning, and neural architecture search, among others.
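
In code, this protocol amounts to a bilevel optimization, roughly in the spirit of the "Dataset Distillation" paper listed below: an inner loop trains a model on the (learnable) synthetic examples, and an outer loop updates those examples so the resulting model performs well on real data. The following is a minimal PyTorch sketch of that formulation; the random tensors, linear classifier, and all hyperparameters are illustrative stand-ins, not any particular paper's setup.

```python
import torch
import torch.nn.functional as F

# Stand-ins for a real dataset (e.g., flattened image features); all
# shapes, models, and hyperparameters here are illustrative.
torch.manual_seed(0)
num_classes, feat_dim = 10, 64
X_real = torch.randn(1024, feat_dim)              # large real training set
y_real = torch.randint(0, num_classes, (1024,))

# Distilled set: a few learnable synthetic examples (one per class here).
X_syn = torch.randn(num_classes, feat_dim, requires_grad=True)
y_syn = torch.arange(num_classes)

syn_opt = torch.optim.Adam([X_syn], lr=0.1)

def train_on_synthetic(steps=5, lr=0.5):
    """Train a fresh linear classifier on the synthetic set, keeping the
    graph (create_graph=True) so gradients flow back into X_syn."""
    w = torch.zeros(feat_dim, num_classes, requires_grad=True)
    for _ in range(steps):
        inner_loss = F.cross_entropy(X_syn @ w, y_syn)
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - lr * g                              # unrolled SGD step
    return w

for step in range(200):
    w = train_on_synthetic()                        # inner loop on distilled data
    outer_loss = F.cross_entropy(X_real @ w, y_real)  # performance on real data
    syn_opt.zero_grad()
    outer_loss.backward()     # backprop through the unrolled inner loop
    syn_opt.step()
```

In a real evaluation, the freshly trained model would be tested on a held-out real test set rather than the training set used for the outer loss; the sketch collapses the two only for brevity.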

Papers

Showing 201–216 of 216 papers

Title | Status | Hype
Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments | Code | 1
Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks | Code | 1
Dataset Distillation using Neural Feature Regression | Code | 0
Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled Primitives | - | 0
Dataset Distillation by Matching Training Trajectories | Code | 2
LiDAR dataset distillation within bayesian active learning framework: Understanding the effect of data augmentation | - | 0
Image Dataset Compression Based on Matrix Product States | - | 0
Dataset Distillation with Infinitely Wide Convolutional Networks | Code | 0
Dataset Meta-Learning from Kernel-Ridge Regression | - | 0
PCPs: Patient Cardiac Prototypes | - | 0
Dataset Meta-Learning from Kernel Ridge-Regression | - | 0
Distilled One-Shot Federated Learning | - | 0
Flexible Dataset Distillation: Learn Labels Instead of Images | Code | 1
Omni-supervised Facial Expression Recognition via Distilled Data | - | 0
Soft-Label Dataset Distillation and Text Dataset Distillation | Code | 1
Dataset Distillation | Code | 1

Leaderboard

No leaderboard results yet.