SOTAVerified

Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled data and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also for applications such as continual learning, privacy-preserving training, and neural architecture search.
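As a concrete illustration of this train-on-distilled, test-on-real protocol, below is a minimal, self-contained sketch of one common formulation: bilevel optimization with a single differentiable inner SGD step. Everything in it is an assumption for illustration, not the method of any particular paper: the "real" data is random noise so the script runs stand-alone, the distilled set holds one learnable example per class, and names like make_model and inner_lr are invented here.

```python
# Hypothetical sketch of dataset distillation as bilevel optimization.
# Assumptions: random toy data stands in for the real training set;
# one inner SGD step; all names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)
num_classes, dim = 10, 64

# Stand-in for the large real training set (random, so the script is self-contained).
real_x = torch.randn(2048, dim)
real_y = torch.randint(0, num_classes, (2048,))

# The distilled dataset: one learnable example per class.
syn_x = torch.randn(num_classes, dim, requires_grad=True)
syn_y = torch.arange(num_classes)

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

outer_opt = torch.optim.Adam([syn_x], lr=1e-2)
inner_lr = 0.1  # illustrative inner-loop step size

for step in range(200):
    model = make_model()  # sample a fresh random initialization each outer step
    params = dict(model.named_parameters())

    # Inner step: one differentiable SGD update on the distilled data.
    syn_loss = F.cross_entropy(functional_call(model, params, (syn_x,)), syn_y)
    grads = torch.autograd.grad(syn_loss, list(params.values()), create_graph=True)
    updated = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

    # Outer objective: the updated model should perform well on real data,
    # so gradients flow back through the inner step into syn_x.
    idx = torch.randint(0, real_x.size(0), (256,))
    real_loss = F.cross_entropy(functional_call(model, updated, (real_x[idx],)), real_y[idx])

    outer_opt.zero_grad()
    real_loss.backward()
    outer_opt.step()

# Evaluation protocol from the task definition: train a brand-new model on the
# distilled set only, then test it on real data. With random labels the final
# accuracy is only a smoke test, not a meaningful benchmark number.
eval_model = make_model()
opt = torch.optim.SGD(eval_model.parameters(), lr=inner_lr)
for _ in range(50):
    opt.zero_grad()
    F.cross_entropy(eval_model(syn_x.detach()), syn_y).backward()
    opt.step()
with torch.no_grad():
    acc = (eval_model(real_x).argmax(dim=1) == real_y).float().mean().item()
print(f"accuracy of model trained only on the distilled set: {acc:.3f}")
```

Published methods replace this single inner step with richer objectives, e.g. matching gradients, matching training trajectories, or kernel-based closed-form solutions, but the outer structure of learning synthetic examples against real-data performance is the same.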

Papers

Showing 101-110 of 216 papers (page 11 of 22)

Title | Status | Hype
Deep Support Vectors | | 0
Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation | | 0
Distilled One-Shot Federated Learning | | 0
Distilling Desired Comments for Enhanced Code Review with Large Language Models | | 0
Distilling Long-tailed Datasets | | 0
Distribution-aware Dataset Distillation for Efficient Image Restoration | | 0
Diversity-Driven Generative Dataset Distillation Based on Diffusion Model with Self-Adaptive Memory | | 0
Efficient Dataset Distillation via Diffusion-Driven Patch Selection for Improved Generalization | | 0
Efficient Low-Resolution Face Recognition via Bridge Distillation | | 0
Evaluating the effect of data augmentation and BALD heuristics on distillation of Semantic-KITTI dataset | | 0

No leaderboard results yet.