SOTAVerified

Self-Knowledge Distillation

Papers

Showing 1–25 of 68 papers

Title | Status | Hype
Effective Whole-body Pose Estimation with Two-stages Distillation | Code | 4
Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2
Graph-based Knowledge Distillation: A survey and experimental evaluation | Code | 1
Multimodality Multi-Lead ECG Arrhythmia Classification using Self-Supervised Learning | Code | 1
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation | Code | 1
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks | Code | 1
Self-Knowledge Distillation with Progressive Refinement of Targets | Code | 1
Regularizing Class-wise Predictions via Self-knowledge Distillation | Code | 1
Noisy Self-Knowledge Distillation for Text Summarization | Code | 1
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation | Code | 1
Three-Stream Temporal-Shift Attention Network Based on Self-Knowledge Distillation for Micro-Expression Recognition | Code | 1
DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision | Code | 1
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation | Code | 1
CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation | Code | 1
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition | Code | 1
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | Code | 1
FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning | Code | 1
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training | Code | 1
Incorporating Graph Information in Transformer-based AMR Parsing | Code | 0
Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature Extractor | Code | 0
Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation | Code | 0
Guiding Frame-Level CTC Alignments Using Self-knowledge Distillation | Code | 0
Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | Code | 0
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels | Code | 0
Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning | Code | 0
Page 1 of 3

No leaderboard results yet.