Dataset Distillation

Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset (the training set) and outputs a small synthetic distilled dataset; the distilled dataset is evaluated by training models on it and testing them on a separate real dataset (the validation/test set). A good distilled dataset is not only useful for dataset understanding, but also has various applications (e.g., continual learning, privacy, and neural architecture search).
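
To make this setup concrete, below is a minimal PyTorch sketch of one well-known family of methods, gradient matching (in the spirit of Zhao et al., "Dataset Condensation with Gradient Matching"): the synthetic images are learnable tensors, optimized so that the gradients they induce in a network match those induced by real data. The architecture, batch sizes, and hyperparameters are illustrative assumptions, not taken from any paper listed below; full methods also repeat this step over many network initializations and training stages.

```python
# Hypothetical, minimal sketch of dataset distillation via gradient
# matching. All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_matching_step(model, real_x, real_y, syn_x, syn_y, syn_opt):
    """One update of the synthetic images: make the gradient the network
    produces on the synthetic batch match the gradient it produces on a
    real mini-batch."""
    criterion = nn.CrossEntropyLoss()

    # Gradient of the loss on real data (treated as a constant target).
    real_loss = criterion(model(real_x), real_y)
    real_grads = [g.detach()
                  for g in torch.autograd.grad(real_loss, model.parameters())]

    # Gradient of the loss on the synthetic batch; keep the graph so the
    # matching loss can be differentiated w.r.t. the synthetic pixels.
    syn_loss = criterion(model(syn_x), syn_y)
    syn_grads = torch.autograd.grad(syn_loss, model.parameters(),
                                    create_graph=True)

    # Layer-wise cosine distance between the two gradient sets.
    match_loss = sum(1 - F.cosine_similarity(rg.flatten(), sg.flatten(), dim=0)
                     for rg, sg in zip(real_grads, syn_grads))

    syn_opt.zero_grad()
    match_loss.backward()  # updates only syn_x, the distilled images
    syn_opt.step()
    return match_loss.item()

# Toy setup: distill 10 synthetic images per class for a 10-class task.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
syn_x = torch.randn(100, 1, 28, 28, requires_grad=True)  # learnable pixels
syn_y = torch.arange(10).repeat_interleave(10)           # fixed labels
syn_opt = torch.optim.SGD([syn_x], lr=0.1)

real_x = torch.randn(256, 1, 28, 28)  # stand-in for a real mini-batch
real_y = torch.randint(0, 10, (256,))
loss = gradient_matching_step(model, real_x, real_y, syn_x, syn_y, syn_opt)
```

Evaluation then follows the definition above: train a fresh model on (syn_x, syn_y) and measure its accuracy on the held-out real test set.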

Papers

Showing 151–200 of 216 papers

Title | Status | Hype
Video Dataset Condensation with Diffusion Models | - | 0
Video Set Distillation: Information Diversification and Temporal Densification | - | 0
FairDD: Fair Dataset Distillation via Synchronized Matching | - | 0
Federated Virtual Learning on Heterogeneous Data with Local-global Distillation | - | 0
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks | - | 0
FedWSIDD: Federated Whole Slide Image Classification via Dataset Distillation | - | 0
Few-Shot Dataset Distillation via Translative Pre-Training | - | 0
Finding Stable Subnetworks at Initialization with Dataset Distillation | - | 0
Privacy-Preserving Federated Learning via Dataset Distillation | - | 0
FocusDD: Real-World Scene Infusion for Robust Dataset Distillation | - | 0
FYI: Flip Your Images for Dataset Distillation | - | 0
Generative Dataset Distillation: Balancing Global Structure and Local Details | - | 0
Generative Dataset Distillation Based on Self-knowledge Distillation | - | 0
Generative Dataset Distillation using Min-Max Diffusion Model | - | 0
Heavy Labels Out! Dataset Distillation with Label Space Lightening | - | 0
Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation | - | 0
Hierarchical Features Matter: A Deep Exploration of Progressive Parameterization Method for Dataset Distillation | - | 0
Hyperbolic Dataset Distillation | - | 0
Image Dataset Compression Based on Matrix Product States | - | 0
Importance-Aware Adaptive Dataset Distillation | - | 0
Information-Guided Diffusion Sampling for Dataset Distillation | - | 0
Knowledge Distillation and Dataset Distillation of Large Language Models: Emerging Trends, Challenges, and Future Directions | - | 0
Knowledge Hierarchy Guided Biological-Medical Dataset Distillation for Domain LLM Training | - | 0
Label-Augmented Dataset Distillation | - | 0
Latent Dataset Distillation with Diffusion Models | - | 0
Exploring the Impact of Dataset Bias on Dataset Distillation | Code | 0
Dataset Distillation for Offline Reinforcement Learning | Code | 0
Exploring Multilingual Text Data Distillation | Code | 0
Exploring Generalized Gait Recognition: Reducing Redundancy and Noise within Indoor and Outdoor Datasets | Code | 0
TD3: Tucker Decomposition Based Dataset Distillation Method for Sequential Recommendation | Code | 0
Dataset Distillers Are Good Label Denoisers In the Wild | Code | 0
Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | Code | 0
Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge | Code | 0
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Enhancing Dataset Distillation via Non-Critical Region Refinement | Code | 0
ATOM: Attention Mixer for Efficient Dataset Distillation | Code | 0
Enhancing Dataset Distillation via Label Inconsistency Elimination and Learning Pattern Refinement | Code | 0
Dataset Distillation with Infinitely Wide Convolutional Networks | Code | 0
AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories | Code | 0
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks | Code | 0
Does Training with Synthetic Data Truly Protect Privacy? | Code | 0
Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0
Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment | Code | 0
Towards Adversarially Robust Dataset Distillation by Curvature Regularization | Code | 0
Neural Spectral Decomposition for Dataset Distillation | Code | 0
Distributional Dataset Distillation with Subtask Decomposition | Code | 0
Going Beyond Feature Similarity: Effective Dataset Distillation based on Class-Aware Conditional Mutual Information | Code | 0
Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning | Code | 0
UniDetox: Universal Detoxification of Large Language Models via Dataset Distillation | Code | 0
Dataset Distillation via Adversarial Prediction Matching | Code | 0
Page 4 of 5

Leaderboard

No leaderboard results yet.