SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
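
In the classic soft-target formulation (Hinton et al., 2015), the smaller "student" model is trained to match the temperature-softened output distribution of the larger "teacher" in addition to the ground-truth labels. The sketch below illustrates that loss, assuming PyTorch; the function name `distillation_loss` and the values of `temperature` and `alpha` are illustrative choices, not taken from any paper listed on this page.

```python
# Minimal sketch of the soft-target distillation loss (Hinton et al., 2015).
# Assumes PyTorch; `temperature` and `alpha` are illustrative hyperparameters.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a KL term that pulls the
    student's softened distribution toward the teacher's."""
    # Soften both output distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # across temperatures, as in the original formulation.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

Setting `alpha=0` recovers ordinary supervised training; raising the temperature exposes more of the teacher's inter-class similarity structure, which is where the transferred "dark knowledge" lives.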

Papers

Showing 3551–3600 of 4240 papers

Title | Status | Hype
SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning | Code | 0
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0
Less-supervised learning with knowledge distillation for sperm morphology analysis | Code | 0
Knowledge Distillation Performs Partial Variance Reduction | Code | 0
Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge Distillation | Code | 0
Applying Knowledge Distillation to Improve Weed Mapping With Drones | Code | 0
Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation | Code | 0
DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images | Code | 0
Continual Contrastive Learning for Image Classification | Code | 0
Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning | Code | 0
Dynamic Rectification Knowledge Distillation | Code | 0
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition | Code | 0
Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning | Code | 0
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data | Code | 0
An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition | Code | 0
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | Code | 0
A Comprehensive Overhaul of Feature Distillation | Code | 0
Leveraging Foundation Models via Knowledge Distillation in Multi-Object Tracking: Distilling DINOv2 Features to FairMOT | Code | 0
Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments | Code | 0
Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation | Code | 0
Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets | Code | 0
Leveraging Large Language Models for Active Merchant Non-player Characters | Code | 0
TextKD-GAN: Text Generation using Knowledge Distillation and Generative Adversarial Networks | Code | 0
Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation | Code | 0
Leveraging Topological Guidance for Improved Knowledge Distillation | Code | 0
Dual Correction Strategy for Ranking Distillation in Top-N Recommender System | Code | 0
Knowledge Distillation of Russian Language Models with Reduction of Vocabulary | Code | 0
Knowledge Distillation Layer that Lets the Student Decide | Code | 0
DSMix: Distortion-Induced Sensitivity Map Based Pre-training for No-Reference Image Quality Assessment | Code | 0
DSG-KD: Knowledge Distillation from Domain-Specific to General Language Models | Code | 0
Better Supervisory Signals by Observing Learning Paths | Code | 0
Knowledge Distillation in RNN-Attention Models for Early Prediction of Student Performance | Code | 0
DS_FusionNet: Dynamic Dual-Stream Fusion with Bidirectional Knowledge Distillation for Plant Disease Recognition | Code | 0
DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0
Prototype-guided Cross-task Knowledge Distillation for Large-scale Models | Code | 0
Do You Remember... the Future? Weak-to-Strong generalization in 3D Object Detection | Code | 0
Context Unaware Knowledge Distillation for Image Retrieval | Code | 0
Proxy-Anchor and EVT-Driven Continual Learning Method for Generalized Category Discovery | Code | 0
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers | Code | 0
Knowledge Distillation from Single to Multi Labels: an Empirical Study | Code | 0
PrUE: Distilling Knowledge from Sparse Teacher Networks | Code | 0
Domain-Lifelong Learning for Dialogue State Tracking via Knowledge Preservation Networks | Code | 0
Few Sample Knowledge Distillation for Efficient Network Compression | Code | 0
Knowledge Distillation from Cross Teaching Teachers for Efficient Semi-Supervised Abdominal Organ Segmentation in CT | Code | 0
Knowledge Distillation For Wireless Edge Learning | Code | 0
Light Multi-segment Activation for Model Compression | Code | 0
Lightning Fast Video Anomaly Detection via Adversarial Knowledge Distillation | Code | 0
Domain Knowledge Transferring for Pre-trained Language Model via Calibrated Activation Boundary Distillation | Code | 0
LightPath: Lightweight and Scalable Path Representation Learning | Code | 0
Domain Generalization for Crop Segmentation with Standardized Ensemble Knowledge Distillation | Code | 0
Page 72 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: ResNet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | ShuffleNetV2 (T: ResNet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: ResNet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | ResNet8x4 (T: ResNet32x4, S: ResNet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | ResNet8x4 (T: ResNet32x4, S: ResNet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: ResNet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: ResNet32x4, S: ShuffleNetV1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | ResNet8x4 (T: ResNet32x4, S: ResNet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | ResNet8x4 (T: ResNet32x4, S: ResNet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | ResNet8x4 (T: ResNet32x4, S: ResNet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified