Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (the training set) and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for dataset understanding, but also has a range of applications (e.g., continual learning, privacy, and neural architecture search).
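As a sketch of how such an algorithm can look in practice, the snippet below distills a real dataset into a few synthetic examples per class by gradient matching, in the spirit of Zhao et al.'s "Dataset Condensation with Gradient Matching". Everything here (the distill function, the small MLP, the batch size and step counts) is an illustrative assumption, not the interface of any particular paper or library.

```python
# A minimal gradient-matching sketch of dataset distillation.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill(real_x, real_y, n_classes, n_per_class=10, steps=500):
    """Learn a tiny synthetic set whose training gradients mimic the real data's."""
    d = real_x.shape[1]
    # The synthetic inputs are the learnable object; labels are fixed and balanced.
    syn_x = torch.randn(n_classes * n_per_class, d, requires_grad=True)
    syn_y = torch.arange(n_classes).repeat_interleave(n_per_class)
    opt = torch.optim.SGD([syn_x], lr=0.1)

    for _ in range(steps):
        # A freshly initialized model each step, so the distilled set
        # generalizes across random initializations, not one network.
        net = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, n_classes))
        params = list(net.parameters())

        # Gradient of the classification loss on a real batch...
        idx = torch.randint(0, real_x.shape[0], (256,))
        g_real = torch.autograd.grad(
            F.cross_entropy(net(real_x[idx]), real_y[idx]), params)
        # ...and on the synthetic set, kept differentiable (create_graph)
        # so the matching loss can be backpropagated into syn_x.
        g_syn = torch.autograd.grad(
            F.cross_entropy(net(syn_x), syn_y), params, create_graph=True)

        # Match the two gradients layer by layer via cosine distance.
        match = sum(
            1 - F.cosine_similarity(a.flatten(), b.detach().flatten(), dim=0)
            for a, b in zip(g_syn, g_real))
        opt.zero_grad()
        match.backward()
        opt.step()
    return syn_x.detach(), syn_y
```

Evaluation then follows the definition above: train a fresh model from scratch on the returned (syn_x, syn_y) and measure its accuracy on the held-out real test set.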

Papers

Showing 91–100 of 216 papers

Title | Status | Hype
A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness |  | 0
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks |  | 0
Dataset Distillation for Histopathology Image Classification |  | 0
Distilling Long-tailed Datasets |  | 0
Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring |  | 0
Distilling Desired Comments for Enhanced Code Review with Large Language Models |  | 0
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation |  | 0
Exploring the potential of prototype-based soft-labels data distillation for imbalanced data classification |  | 0
Distilled One-Shot Federated Learning |  | 0
A Survey on Dataset Distillation: Approaches, Applications and Future Directions |  | 0

Leaderboard

No leaderboard results yet.