Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also in applications such as continual learning, privacy-preserving learning, and neural architecture search.
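To make the setup concrete, below is a minimal, self-contained sketch of one well-known family of methods, gradient matching (in the spirit of Zhao et al., "Dataset Condensation with Gradient Matching"), followed by the evaluation protocol described above: a fresh model is trained only on the distilled examples and then tested on held-out real data. The toy Gaussian data, the tiny MLP, and all hyperparameters are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal sketch of dataset distillation via gradient matching.
# Everything here (toy data, network, hyperparameters) is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_classes = 16, 2

def make_blobs(n):
    """Toy 'real' data: two Gaussian blobs, class 1 shifted by +2."""
    y = torch.arange(n) % n_classes
    x = torch.randn(n, dim) + 2.0 * y.unsqueeze(1)
    return x, y

train_x, train_y = make_blobs(512)   # large real training set
test_x,  test_y  = make_blobs(256)   # held-out real test set

# Learnable synthetic set: one example per class (the "distilled" data).
syn_x = torch.randn(n_classes, dim, requires_grad=True)
syn_y = torch.arange(n_classes)
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

def new_net():
    return nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

for step in range(200):
    if step % 50 == 0:  # refresh weights so syn_x does not overfit one network
        net = new_net()
    params = list(net.parameters())
    idx = torch.randint(0, len(train_x), (64,))
    # Gradients of the training loss on a real mini-batch...
    g_real = torch.autograd.grad(
        F.cross_entropy(net(train_x[idx]), train_y[idx]), params)
    # ...and on the synthetic set (graph kept so we can optimize syn_x).
    g_syn = torch.autograd.grad(
        F.cross_entropy(net(syn_x), syn_y), params, create_graph=True)
    # Update the synthetic data to make the two gradients match.
    match = sum(((gr.detach() - gs) ** 2).sum() for gr, gs in zip(g_real, g_syn))
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()

# Evaluation protocol from the task description: train a *fresh* model
# only on the distilled set, then test it on the real held-out set.
eval_net, distilled = new_net(), syn_x.detach()
opt_eval = torch.optim.Adam(eval_net.parameters(), lr=0.01)
for _ in range(200):
    opt_eval.zero_grad()
    F.cross_entropy(eval_net(distilled), syn_y).backward()
    opt_eval.step()
acc = (eval_net(test_x).argmax(1) == test_y).float().mean().item()
print(f"Test accuracy after training on {n_classes} distilled examples: {acc:.2f}")
```

Published methods replace these toy ingredients with real image datasets, convolutional networks, and stronger objectives (e.g., trajectory matching or kernel-based losses, as in several of the papers below), but the distill-then-retrain evaluation loop is the same.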

Papers

Showing 181–190 of 216 papers

QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation
Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective
Rethinking Data Distillation: Do Not Overlook Calibration
UDD: Dataset Distillation via Mining Underutilized Regions
Robust Dataset Distillation by Matching Adversarial Trajectories
Robust Offline Reinforcement Learning for Non-Markovian Decision Processes
Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring
Secure Federated Data Distillation
Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator

Leaderboard

No leaderboard results yet.