SOTAVerified

Self-Knowledge Distillation
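In its basic form, self-knowledge distillation trains a network against its own softened predictions (e.g., from a previous-epoch snapshot, an EMA copy, or an auxiliary branch) in addition to the ground-truth labels, which is the common thread across the papers listed below. A minimal sketch of such a loss, assuming PyTorch; the function name, temperature, and mixing weight are illustrative, not taken from any specific paper here:

```python
import torch
import torch.nn.functional as F

def self_kd_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.5):
    """Cross-entropy on the labels plus a KL term toward the model's own soft targets.

    teacher_logits typically come from the same network (a previous-epoch
    snapshot, an EMA copy, or an auxiliary head) and are detached so no
    gradient flows through them.
    """
    ce = F.cross_entropy(student_logits, targets)
    soft_teacher = F.softmax(teacher_logits.detach() / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
    return (1.0 - alpha) * ce + alpha * kd

# Illustrative usage: treat a frozen copy of the model from the previous epoch
# as its own teacher.
# loss = self_kd_loss(model(x), prev_epoch_model(x), y)
```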

Papers

Showing 1–50 of 68 papers

Title | Status | Hype
Effective Whole-body Pose Estimation with Two-stages Distillation | Code | 4
Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2
Regularizing Class-wise Predictions via Self-knowledge Distillation | Code | 1
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training | Code | 1
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation | Code | 1
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation | Code | 1
Self-Knowledge Distillation with Progressive Refinement of Targets | Code | 1
CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation | Code | 1
FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning | Code | 1
Three-Stream Temporal-Shift Attention Network Based on Self-Knowledge Distillation for Micro-Expression Recognition | Code | 1
Graph-based Knowledge Distillation: A survey and experimental evaluation | Code | 1
DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision | Code | 1
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition | Code | 1
Multimodality Multi-Lead ECG Arrhythmia Classification using Self-Supervised Learning | Code | 1
Noisy Self-Knowledge Distillation for Text Summarization | Code | 1
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | Code | 1
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks | Code | 1
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation | Code | 1
Student Helping Teacher: Teacher Evolution via Self-Knowledge Distillation | Code | 0
Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation | Code | 0
Distilled Gradual Pruning with Pruned Fine-tuning | Code | 0
Incorporating Graph Information in Transformer-based AMR Parsing | Code | 0
RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation | Code | 0
SalNAS: Efficient Saliency-prediction Neural Architecture Search with self-knowledge distillation | Code | 0
Robust and Accurate Object Detection via Self-Knowledge Distillation | Code | 0
Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | Code | 0
Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature Extractor | Code | 0
Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning | Code | 0
Eyelid’s Intrinsic Motion-aware Feature Learning for Real-time Eyeblink Detection in the Wild | Code | 0
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion | Code | 0
Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning | Code | 0
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels | Code | 0
Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training | Code | 0
Deep Clustering with Diffused Sampling and Hardness-aware Self-distillation | Code | 0
Revisiting Knowledge Distillation via Label Smoothing Regularization | Code | 0
Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation | Code | 0
Guiding Frame-Level CTC Alignments Using Self-knowledge Distillation | Code | 0
You Do Not Need Additional Priors or Regularizers in Retinex-Based Low-Light Image Enhancement | | 0
A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition | | 0
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack | | 0
Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation | | 0
Double Reverse Regularization Network Based on Self-Knowledge Distillation for SAR Object Classification | | 0
Extending Label Smoothing Regularization with Self-Knowledge Distillation | | 0
Extracting knowledge from features with multilevel abstraction | | 0
Generative Dataset Distillation Based on Self-knowledge Distillation | | 0
Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding | | 0
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models | | 0
Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition | | 0
Look Backward and Forward: Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation | | 0
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | | 0

No leaderboard results yet.