SOTAVerified

Self-Knowledge Distillation

Papers

Showing 61-68 of 68 papers

Title | Status | Hype
Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training | Code | 0
Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | Code | 0
Distilled Gradual Pruning with Pruned Fine-tuning | Code | 0
Deep Clustering with Diffused Sampling and Hardness-aware Self-distillation | Code | 0
Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning | Code | 0
RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation | Code | 0
Student Helping Teacher: Teacher Evolution via Self-Knowledge Distillation | Code | 0
Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation | Code | 0
Page 7 of 7

No leaderboard results yet.