
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also has applications such as continual learning, privacy, and neural architecture search.
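
To make the pipeline concrete, below is a minimal sketch of one common distillation objective, gradient matching: the synthetic examples are optimized so that the gradients they induce in a randomly initialized network match those induced by batches of real data. The task description above does not prescribe any particular method, and everything in this sketch (the two-layer network, the random tensors standing in for a real dataset, all hyperparameters) is an illustrative assumption rather than any specific paper's setup.

```python
# Sketch of dataset distillation via gradient matching (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def param_grads(model, x, y, create_graph=False):
    """Gradients of the classification loss w.r.t. the model parameters."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, list(model.parameters()), create_graph=create_graph)

def make_model(dim, num_classes):
    # Small illustrative network; real work would use e.g. a ConvNet.
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

num_classes, dim = 10, 32
x_real = torch.randn(1000, dim)                  # stand-in for the large real training set
y_real = torch.randint(0, num_classes, (1000,))

# The distilled dataset: one learnable example per class (the output artifact).
x_syn = torch.randn(num_classes, dim, requires_grad=True)
y_syn = torch.arange(num_classes)
opt_syn = torch.optim.Adam([x_syn], lr=0.1)

for step in range(300):
    model = make_model(dim, num_classes)         # fresh random init each step
    idx = torch.randint(0, len(x_real), (128,))
    g_real = param_grads(model, x_real[idx], y_real[idx])        # no graph needed
    g_syn = param_grads(model, x_syn, y_syn, create_graph=True)  # backprop into x_syn
    match = sum(F.mse_loss(gs, gr) for gs, gr in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()

# Evaluation protocol from the definition above: train a fresh model on the
# distilled set alone, then test it on held-out real data.
x_distilled = x_syn.detach()
model = make_model(dim, num_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    loss = F.cross_entropy(model(x_distilled), y_syn)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.randn(200, dim)                   # stand-in validation/test set
y_test = torch.randint(0, num_classes, (200,))
acc = (model(x_test).argmax(dim=1) == y_test).float().mean()
print(f"test accuracy of model trained on {num_classes} distilled examples: {acc.item():.2f}")
```

Sampling a fresh network at every distillation step keeps the synthetic examples from overfitting a single initialization; with the random stand-in data above the final accuracy is of course meaningless, and a real run would substitute an actual dataset such as CIFAR-10.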

Papers

Showing 201-216 of 216 papers

Title | Hype
Distilling Long-tailed Datasets | 0
A Comprehensive Survey of Dataset Distillation | 0
Distilling Desired Comments for Enhanced Code Review with Large Language Models | 0
Distilled One-Shot Federated Learning | 0
Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation | 0
The Curse of Unrolling: Rate of Differentiating Through Optimization | 0
Deep Support Vectors | 0
Efficient Dataset Distillation via Diffusion-Driven Patch Selection for Improved Generalization | 0
DDFAD: Dataset Distillation Framework for Audio Data | 0
Efficient Low-Resolution Face Recognition via Bridge Distillation | 0
Dataset Meta-Learning from Kernel Ridge-Regression | 0
The Evolution of Dataset Distillation: Toward Scalable and Generalizable Solutions | 0
Video Dataset Condensation with Diffusion Models | 0
Evaluating the effect of data augmentation and BALD heuristics on distillation of Semantic-KITTI dataset | 0
Dataset Distillation with Probabilistic Latent Features | 0
Page 9 of 9

Leaderboard

No leaderboard results yet.