
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data but also has various applications (e.g., continual learning, privacy-preserving training, and neural architecture search).
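The train-on-synthetic, test-on-real protocol described above is straightforward to express in code. Below is a minimal sketch in PyTorch; the function name, `model_fn` helper, and hyperparameters are illustrative assumptions, not any specific paper's implementation.

```python
import torch
from torch import nn, optim

def evaluate_distilled(distilled_x, distilled_y, test_loader, model_fn,
                       steps=1000, lr=0.01):
    """Train a fresh model on a small distilled set, then test it on
    held-out real data (the standard evaluation protocol, sketched here
    under assumed names and hyperparameters).

    distilled_x: (n, ...) tensor of synthetic inputs (n is small)
    distilled_y: (n,) tensor of integer class labels
    test_loader: iterable of (x, y) batches of real data
    model_fn:    callable returning a freshly initialized nn.Module
    """
    model = model_fn()
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    # Train only on the synthetic distilled dataset.
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()

    # Measure accuracy on the real test set.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```

Distillation methods differ in how the synthetic examples are produced (e.g., gradient matching or bilevel optimization), but the resulting sets are typically scored with a loop like this, averaged over several random model initializations.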

Papers

Showing 181–190 of 216 papers

Title | Status | Hype
Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality | | 0
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks | | 0
Multi-Source Domain Adaptation meets Dataset Distillation through Dataset Dictionary Learning | | 0
Towards Mitigating Architecture Overfitting on Distilled Datasets | Code | 0
Exploring Multilingual Text Data Distillation | Code | 0
Rethinking Data Distillation: Do Not Overlook Calibration | | 0
Dataset Distillation Meets Provable Subset Selection | | 0
Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching | | 0
On the Size and Approximation Error of Distilled Sets | | 0
A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness | | 0

Leaderboard

No leaderboard results yet.