SOTAVerified

Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled dataset and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also in applications such as continual learning, privacy-preserving training, and neural architecture search.
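The train-on-synthetic, evaluate-on-real protocol above can be sketched on a toy problem. Everything in this sketch is illustrative rather than any published method: the linear model, the two-point distilled set, the fixed synthetic inputs, and the closed-form "training" step are all assumptions chosen to keep the bilevel objective (optimize the synthetic data so a model trained on it does well on the real data) solvable in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Large" real training set: 200 points from y = 2*x1 - x2 + noise.
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=200)

# Distilled set: only 2 synthetic points. The inputs are fixed here
# (an arbitrary choice); only the synthetic labels ys are learned.
Xs = np.array([[1.0, 0.5], [0.2, 1.0]])
ys = np.zeros(2)

# "Training" a linear model on the distilled set is a closed-form
# least-squares fit: w(ys) = pinv(Xs) @ ys.
P = np.linalg.pinv(Xs)

# Outer loop of the distillation objective: adjust ys so that the model
# trained on the distilled set minimizes the MSE on the *real* data.
for _ in range(500):
    w = P @ ys                                      # model trained on distilled data
    grad = 2.0 * P.T @ X.T @ (X @ w - y) / len(X)   # d(real MSE)/d ys
    ys -= 0.1 * grad                                # update the synthetic labels

w_distilled = P @ ys
print(np.round(w_distilled, 2))  # close to the weights fit on the full real set
```

Two synthetic points stand in for 200 real ones: a model "trained" on the distilled pair recovers nearly the same weights as one fit on the full real set, which is exactly the evaluation criterion described above. Published methods apply the same idea to deep networks, where the inner training step is itself iterative and the synthetic images are optimized by techniques such as gradient or trajectory matching.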

Papers

Showing 81–90 of 216 papers

Title | Status | Hype
A Comprehensive Survey of Dataset Distillation | | 0
Few-Shot Dataset Distillation via Translative Pre-Training | | 0
Dataset Distillation for Quantum Neural Networks | | 0
Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing | | 0
Diversity-Driven Generative Dataset Distillation Based on Diffusion Model with Self-Adaptive Memory | | 0
Adaptive Dataset Quantization | | 0
Distribution-aware Dataset Distillation for Efficient Image Restoration | | 0
Dataset Distillation for Medical Dataset Sharing | | 0
A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness | | 0
Efficient Low-Resolution Face Recognition via Bridge Distillation | | 0
Page 9 of 22

Leaderboard

No leaderboard results yet.