Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset, which is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is useful not only for understanding the original data, but also in applications such as continual learning, privacy, and neural architecture search.
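
One common formulation treats the synthetic examples as learnable parameters in a bilevel optimization: an inner loop trains a model on the distilled data, and an outer loop updates the distilled data so that the trained model performs well on the real training set. Below is a minimal, self-contained PyTorch sketch of this idea on a toy linear problem; the shapes, hyperparameters, single-step inner loop, and fixed zero initialization are all simplifying assumptions for illustration, not the protocol of any specific paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_classes, n_syn = 20, 5, 10

# Toy "real" data: labels come from a hidden linear rule, so a small
# distilled set can in principle capture the full labeling function.
w_true = torch.randn(dim, n_classes)
x_train = torch.randn(1000, dim)
y_train = (x_train @ w_true).argmax(1)
x_test = torch.randn(500, dim)                  # separate real test set
y_test = (x_test @ w_true).argmax(1)

# Learnable distilled dataset: n_syn examples with fixed labels.
x_syn = torch.randn(n_syn, dim, requires_grad=True)
y_syn = torch.arange(n_syn) % n_classes
opt = torch.optim.Adam([x_syn], lr=0.1)

for step in range(300):
    # Inner loop: one differentiable SGD step on the distilled data,
    # keeping the graph so gradients flow back into x_syn.
    w = torch.zeros(dim, n_classes, requires_grad=True)
    inner_loss = F.cross_entropy(x_syn @ w, y_syn)
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_trained = w - 1.0 * g

    # Outer loop: evaluate the trained model on real training data and
    # update the synthetic examples to reduce that loss.
    outer_loss = F.cross_entropy(x_train @ w_trained, y_train)
    opt.zero_grad()
    outer_loss.backward()
    opt.step()

# Evaluation protocol: train a fresh model from scratch on the
# distilled set only, then test it on the held-out real test set.
w = torch.zeros(dim, n_classes, requires_grad=True)
sgd = torch.optim.SGD([w], lr=0.5)
for _ in range(200):
    sgd.zero_grad()
    F.cross_entropy(x_syn.detach() @ w, y_syn).backward()
    sgd.step()
acc = ((x_test @ w).argmax(1) == y_test).float().mean().item()
print(f"test accuracy after training only on {n_syn} distilled points: {acc:.2f}")
```

Real methods typically unroll many inner steps over randomly sampled initializations, or avoid unrolling altogether via surrogate objectives such as gradient matching, training-trajectory matching, or kernel ridge regression; the sketch above only shows the structure of the bilevel objective and the train-on-distilled, test-on-real evaluation.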

Papers

Showing 101–110 of 216 papers

On Learning Representations for Tabular Data Distillation
Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring
Dataset Distillation as Pushforward Optimal Quantization
FocusDD: Real-World Scene Infusion for Robust Dataset Distillation
Generative Dataset Distillation Based on Self-knowledge Distillation
Hierarchical Features Matter: A Deep Exploration of Progressive Parameterization Method for Dataset Distillation
OPTICAL: Leveraging Optimal Transport for Contribution Allocation in Dataset Distillation
Towards Universal Dataset Distillation via Task-Driven Diffusion
Distilling Desired Comments for Enhanced Code Review with Large Language Models
Adaptive Dataset Quantization