SOTAVerified

Self-Knowledge Distillation
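For context, self-knowledge distillation trains a network against soft targets produced by the network itself (e.g. an earlier snapshot of its own predictions) rather than by a separate teacher, typically mixing a hard-label cross-entropy term with a temperature-softened KL term. A minimal NumPy sketch of that combined loss; the function name, signature, and default `alpha`/`T` values are illustrative, not from any specific paper below:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_kd_loss(logits, past_logits, label, alpha=0.5, T=4.0):
    """Self-knowledge distillation loss (hypothetical helper).

    The teacher signal comes from the model's own earlier logits
    (`past_logits`), softened by temperature T; no external teacher.
    Returns (1 - alpha) * CE(hard label) + alpha * T^2 * KL(teacher || student).
    """
    p = softmax(logits)                 # student predictions, T = 1
    p_T = softmax(logits, T=T)          # student predictions, softened
    q_T = softmax(past_logits, T=T)     # self-teacher soft targets
    ce = -np.log(p[label] + 1e-12)      # hard-label cross-entropy
    kl = np.sum(q_T * (np.log(q_T + 1e-12) - np.log(p_T + 1e-12)))
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

When the current and past logits coincide, the KL term vanishes, so the loss smoothly reduces to plain cross-entropy as the model stops changing.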

Papers

Showing 26–50 of 68 papers

Title | Status | Hype
Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | Code | 0
Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature Extractor | Code | 0
Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning | Code | 0
Eyelid’s Intrinsic Motion-aware Feature Learning for Real-time Eyeblink Detection in the Wild | Code | 0
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion | Code | 0
Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning | Code | 0
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels | Code | 0
Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training | Code | 0
Deep Clustering with Diffused Sampling and Hardness-aware Self-distillation | Code | 0
Revisiting Knowledge Distillation via Label Smoothing Regularization | Code | 0
Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation | Code | 0
Guiding Frame-Level CTC Alignments Using Self-knowledge Distillation | Code | 0
You Do Not Need Additional Priors or Regularizers in Retinex-Based Low-Light Image Enhancement | - | 0
A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition | - | 0
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack | - | 0
Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation | - | 0
Double Reverse Regularization Network Based on Self-Knowledge Distillation for SAR Object Classification | - | 0
Extending Label Smoothing Regularization with Self-Knowledge Distillation | - | 0
Extracting knowledge from features with multilevel abstraction | - | 0
Generative Dataset Distillation Based on Self-knowledge Distillation | - | 0
Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding | - | 0
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models | - | 0
Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition | - | 0
Look Backward and Forward: Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation | - | 0
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | - | 0
Page 2 of 3

No leaderboard results yet.