SOTAVerified

Self-Knowledge Distillation

Papers

Showing 1–50 of 68 papers

Title | Status | Hype
Effective Whole-body Pose Estimation with Two-stages Distillation | Code | 4
Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2
Three-Stream Temporal-Shift Attention Network Based on Self-Knowledge Distillation for Micro-Expression Recognition | Code | 1
Graph-based Knowledge Distillation: A survey and experimental evaluation | Code | 1
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | Code | 1
Self-Knowledge Distillation with Progressive Refinement of Targets | Code | 1
Noisy Self-Knowledge Distillation for Text Summarization | Code | 1
DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision | Code | 1
Multimodality Multi-Lead ECG Arrhythmia Classification using Self-Supervised Learning | Code | 1
MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition | Code | 1
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training | Code | 1
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation | Code | 1
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation | Code | 1
CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation | Code | 1
Regularizing Class-wise Predictions via Self-knowledge Distillation | Code | 1
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation | Code | 1
FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning | Code | 1
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks | Code | 1
Siamese Sleep Transformer For Robust Sleep Stage Scoring With Self-knowledge Distillation and Selective Batch Sampling | — | 0
A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition | — | 0
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack | — | 0
Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation | — | 0
Double Reverse Regularization Network Based on Self-Knowledge Distillation for SAR Object Classification | — | 0
Extending Label Smoothing Regularization with Self-Knowledge Distillation | — | 0
Extracting knowledge from features with multilevel abstraction | — | 0
Generative Dataset Distillation Based on Self-knowledge Distillation | — | 0
Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding | — | 0
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models | — | 0
Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition | — | 0
Look Backward and Forward: Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation | — | 0
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | — | 0
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots | — | 0
SELF-KNOWLEDGE DISTILLATION ADVERSARIAL ATTACK | — | 0
Self-Knowledge Distillation based Self-Supervised Learning for Covid-19 Detection from Chest X-Ray Images | — | 0
Self-Knowledge Distillation for Learning Ambiguity | — | 0
Self-Knowledge Distillation for Surgical Phase Recognition | — | 0
Self-Knowledge Distillation in Natural Language Processing | — | 0
Self-Knowledge Distillation via Dropout | — | 0
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation | — | 0
Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection | — | 0
Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks | — | 0
TASKED: Transformer-based Adversarial learning for human activity recognition using wearable sensors via Self-KnowledgE Distillation | — | 0
Three Factors to Improve Out-of-Distribution Detection | — | 0
Towards Satellite Non-IID Imagery: A Spectral Clustering-Assisted Federated Learning Approach | — | 0
Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation | — | 0
Weakly Supervised Monocular 3D Detection with a Single-View Image | — | 0
X Modality Assisting RGBT Object Tracking | — | 0
xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation | — | 0
You Do Not Need Additional Priors or Regularizers in Retinex-Based Low-Light Image Enhancement | — | 0
Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation | Code | 0
Page 1 of 2

No leaderboard results yet.