
Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve performance comparable to models trained on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on the distilled dataset and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for understanding the original data, but also has applications in continual learning, privacy, and neural architecture search, among others.
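To make the setup concrete, below is a minimal sketch of one common family of distillation methods, gradient matching, where the synthetic examples are optimized so that the gradient they induce in a network matches the gradient induced by real data. The toy data, network, sizes, and hyperparameters are all illustrative assumptions, not any specific paper's recipe.

```python
# A minimal gradient-matching sketch of dataset distillation (PyTorch).
# All shapes, the toy data, and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "real" dataset: 1000 samples, 32-dim features, 10 classes.
X_real = torch.randn(1000, 32)
y_real = torch.randint(0, 10, (1000,))

# Synthetic distilled set: one learnable example per class.
X_syn = torch.randn(10, 32, requires_grad=True)
y_syn = torch.arange(10)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt_syn = torch.optim.Adam([X_syn], lr=0.01)

def grad_vector(loss, params):
    # Flatten parameter gradients into one vector; create_graph=True keeps
    # the graph so the matching loss stays differentiable w.r.t. X_syn.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

for step in range(200):
    # Periodically re-initialize the network so the synthetic data does not
    # overfit to a single set of weights (common in gradient-matching work).
    if step % 50 == 0:
        for m in model:
            if isinstance(m, nn.Linear):
                m.reset_parameters()
    params = list(model.parameters())

    idx = torch.randint(0, len(X_real), (64,))
    g_real = grad_vector(F.cross_entropy(model(X_real[idx]), y_real[idx]), params)
    g_syn = grad_vector(F.cross_entropy(model(X_syn), y_syn), params)

    # Train the synthetic data so it induces the same parameter update
    # as a real mini-batch would.
    match_loss = F.mse_loss(g_syn, g_real.detach())
    opt_syn.zero_grad()
    match_loss.backward()
    opt_syn.step()
```

Evaluation then mirrors the task definition above: train a fresh model from scratch on (X_syn, y_syn) and measure its accuracy on held-out real data.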

Papers

Showing papers 201–216 of 216.

Title | Hype
Exploring the potential of prototype-based soft-labels data distillation for imbalanced data classification | 0
Diversity-Driven Generative Dataset Distillation Based on Diffusion Model with Self-Adaptive Memory | 0
FairDD: Fair Dataset Distillation via Synchronized Matching | 0
Distribution-aware Dataset Distillation for Efficient Image Restoration | 0
Distilling Desired Comments for Enhanced Code Review with Large Language Models | 0
A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness | 0
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks | 0
FedWSIDD: Federated Whole Slide Image Classification via Dataset Distillation | 0
A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption | 0
Finding Stable Subnetworks at Initialization with Dataset Distillation | 0
Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation | 0
Privacy-Preserving Federated Learning via Dataset Distillation | 0
Deep Support Vectors | 0
FocusDD: Real-World Scene Infusion for Robust Dataset Distillation | 0
DDFAD: Dataset Distillation Framework for Audio Data | 0
FYI: Flip Your Images for Dataset Distillation | 0

Leaderboard

No leaderboard results yet.