Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes a large real dataset (the training set) as input and outputs a small synthetic distilled dataset. The distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). Beyond aiding dataset understanding, a good distilled dataset has a range of applications, e.g., continual learning, privacy, and neural architecture search.
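The description above fixes the interface of the task; as a concrete illustration, the following is a minimal sketch of one well-known family of methods, gradient matching (in the spirit of Zhao et al.'s Dataset Condensation): synthetic images are optimized so that the gradients they induce in a freshly initialized model match those induced by real data. All names (`make_model`, `distill`, `match_loss`), the MNIST-like shapes, and every hyperparameter are illustrative assumptions, not the method of any particular paper listed below.

```python
# Minimal sketch of dataset distillation via gradient matching (in the spirit
# of Zhao et al.'s "Dataset Condensation"). Shapes assume MNIST-like inputs;
# all names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Tiny linear classifier; published work typically uses small ConvNets.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def match_loss(g_syn, g_real):
    # Sum of per-tensor cosine distances between the two gradient sets.
    total = 0.0
    for gs, gr in zip(g_syn, g_real):
        total = total + (1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0))
    return total

def distill(real_x, real_y, n_classes=10, n_per_class=1,
            outer_steps=200, lr=0.1, device="cpu"):
    # The learnable "dataset": a few synthetic images with fixed balanced labels.
    syn_x = torch.randn(n_classes * n_per_class, 1, 28, 28,
                        device=device, requires_grad=True)
    syn_y = torch.arange(n_classes, device=device).repeat_interleave(n_per_class)
    opt = torch.optim.Adam([syn_x], lr=lr)

    for _ in range(outer_steps):
        model = make_model().to(device)   # fresh random init each outer step
        params = list(model.parameters())
        # Gradients of the training loss on real data (kept out of the graph)...
        g_real = torch.autograd.grad(
            F.cross_entropy(model(real_x), real_y), params)
        # ...and on synthetic data (kept in the graph so syn_x receives gradients).
        g_syn = torch.autograd.grad(
            F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)
        opt.zero_grad()
        match_loss(g_syn, g_real).backward()
        opt.step()
    return syn_x.detach(), syn_y
```

Evaluation then follows the protocol described above: train a fresh model on the returned `(syn_x, syn_y)` and report its accuracy on a held-out real test set.

```python
# Illustrative usage with random stand-in data; substitute a real data loader.
real_x, real_y = torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))
syn_x, syn_y = distill(real_x, real_y)
```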

Papers

Showing 51–75 of 216 papers (page 3 of 9)

Title | Status | Hype
OPTICAL: Leveraging Optimal Transport for Contribution Allocation in Dataset Distillation | - | 0
Towards Universal Dataset Distillation via Task-Driven Diffusion | - | 0
A Large-Scale Study on Video Action Dataset Condensation | Code | 1
Distilling Desired Comments for Enhanced Code Review with Large Language Models | - | 0
Adaptive Dataset Quantization | - | 0
Going Beyond Feature Similarity: Effective Dataset Distillation based on Class-Aware Conditional Mutual Information | Code | 0
Efficient Dataset Distillation via Diffusion-Driven Patch Selection for Improved Generalization | - | 0
Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation | - | 0
FairDD: Fair Dataset Distillation via Synchronized Matching | - | 0
DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation | Code | 1
Video Set Distillation: Information Diversification and Temporal Densification | - | 0
Data-to-Model Distillation: Data-Efficient Learning Framework | Code | 0
Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning | Code | 0
Dataset Distillers Are Good Label Denoisers In the Wild | Code | 0
Color-Oriented Redundancy Reduction in Dataset Distillation | Code | 0
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Robust Offline Reinforcement Learning for Non-Markovian Decision Processes | - | 0
Privacy-Preserving Federated Learning via Dataset Distillation | - | 0
Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios | Code | 1
Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? | Code | 1
Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0
Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement | Code | 0
Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | Code | 0
MetaDD: Boosting Dataset Distillation with Neural Network Architecture-Invariant Generalization | - | 0
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0

Leaderboard

No leaderboard results yet.