
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized. Distillation exploits that gap: a compact student trained to match the teacher's output distribution, rather than only the hard labels, can recover much of the teacher's accuracy at a fraction of the inference cost.
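Below is a minimal sketch of the classic soft-target distillation loss (after Hinton et al., 2015), written in PyTorch. The temperature T, the weighting alpha, and the toy teacher/student models are illustrative assumptions, not taken from any paper listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions, scaled by T^2 so its gradient
    # magnitude stays comparable to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy stand-ins for a large teacher and a small student (hypothetical models).
teacher = nn.Linear(32, 10)
student = nn.Linear(32, 10)
x = torch.randn(8, 32)          # dummy batch of 8 feature vectors
y = torch.randint(0, 10, (8,))  # dummy ground-truth labels

teacher.eval()
with torch.no_grad():           # the teacher is frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()                 # gradients flow only into the student
```

A higher temperature spreads the teacher's probability mass over more classes, exposing the inter-class similarities in its predictions; alpha trades that signal off against the ordinary supervised loss.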

Papers

Showing 3301–3350 of 4240 papers

Model Compression for Resource-Constrained Mobile Robots
Model Compression Methods for YOLOv5: A Review
Model compression using knowledge distillation with integrated gradients
Model Compression Using Optimal Transport
Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
Model Distillation for Faithful Explanations of Medical Code Predictions
Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
On Cross-Layer Alignment for Model Fusion of Heterogeneous Neural Networks
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation
Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation
Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples
Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation
Model Stitching by Functional Latent Alignment
Modifying Final Splits of Classification Tree for Fine-tuning Subpopulation Target in Policy Making
Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router
MoKD: Multi-Task Optimization for Knowledge Distillation
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Mono2Stereo: Monocular Knowledge Transfer for Enhanced Stereo Matching
More From Less: Self-Supervised Knowledge Distillation for Routine Histopathology Data
Motion Pyramid Networks for Accurate and Efficient Cardiac Motion Estimation
MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders
MS-KD: Multi-Organ Segmentation with Multiple Binary-Labeled Datasets
MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events
MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
MulDE: Multi-teacher Knowledge Distillation for Low-dimensional Knowledge Graph Embeddings
Multi-adversarial Faster-RCNN with Paradigm Teacher for Unrestricted Object Detection
Multi-Branch Mutual-Distillation Transformer for EEG-Based Seizure Subtype Classification
Multi-Channel Multi-Domain based Knowledge Distillation Algorithm for Sleep Staging with Single-Channel EEG
Cultural Commonsense Knowledge for Intercultural Dialogues
Multi-Document Financial Question Answering using LLMs
Multi-Frame Self-Supervised Depth Estimation with Multi-Scale Feature Fusion in Dynamic Scenes
Multi-Frame to Single-Frame: Knowledge Distillation for 3D Object Detection
Multi-Grained Knowledge Distillation for Named Entity Recognition
Multi-Granularity Contrastive Knowledge Distillation for Multimodal Named Entity Recognition
Multi-Granularity Semantic Revision for Large Language Model Distillation
Multi-head Knowledge Distillation for Model Compression
Multi-label Class Incremental Emotion Decoding with Augmented Emotional Semantics Learning
Multi-label Contrastive Predictive Coding
Multi-label Emotion Analysis in Conversation via Multimodal Knowledge Distillation
Multi-level Distillation of Semantic Knowledge for Pre-training Multilingual Language Model
Multilingual Neural Machine Translation: Can Linguistic Hierarchies Help?
Multi-MLLM Knowledge Distillation for Out-of-Context News Detection
Multimodal Commonsense Knowledge Distillation for Visual Question Answering
Multi-modal Cross-domain Self-supervised Pre-training for fMRI and EEG Fusion
Multi-Modal Few-Shot Object Detection with Meta-Learning-Based Cross-Modal Prompting

Benchmark Results

In each model entry, T denotes the teacher network and S the student; a dash in the Verified column means no verified value has been recorded.

#  | Model                              | Metric             | Claimed | Verified | Status
1  | ScaleKD (T: BEiT-L, S: ViT-B/14)   | Top-1 accuracy (%) | 86.43   | –        | Unverified
2  | ScaleKD (T: Swin-L, S: ViT-B/16)   | Top-1 accuracy (%) | 85.53   | –        | Unverified
3  | ScaleKD (T: Swin-L, S: ViT-S/16)   | Top-1 accuracy (%) | 83.93   | –        | Unverified
4  | ScaleKD (T: Swin-L, S: Swin-T)     | Top-1 accuracy (%) | 83.8    | –        | Unverified
5  | KD++ (T: RegNetY-16GF, S: ViT-B)   | Top-1 accuracy (%) | 83.6    | –        | Unverified
6  | VkD (T: RegNetY-160, S: DeiT-S)    | Top-1 accuracy (%) | 82.9    | –        | Unverified
7  | SpectralKD (T: Swin-S, S: Swin-T)  | Top-1 accuracy (%) | 82.7    | –        | Unverified
8  | ScaleKD (T: Swin-L, S: ResNet-50)  | Top-1 accuracy (%) | 82.55   | –        | Unverified
9  | DiffKD (T: Swin-L, S: Swin-T)      | Top-1 accuracy (%) | 82.5    | –        | Unverified
10 | DIST (T: Swin-L, S: Swin-T)        | Top-1 accuracy (%) | 82.3    | –        | Unverified

#  | Model                                                  | Metric             | Claimed | Verified | Status
1  | SRD (T: ResNet-32x4, S: ShuffleNet-V2)                 | Top-1 accuracy (%) | 79.86   | –        | Unverified
2  | ShuffleNet-V2 (T: ResNet-32x4, S: ShuffleNet-V2)       | Top-1 accuracy (%) | 78.76   | –        | Unverified
3  | MV-MR (T: CLIP/ViT-B-16, S: ResNet-50)                 | Top-1 accuracy (%) | 78.6    | –        | Unverified
4  | ResNet-8x4 (T: ResNet-32x4, S: ResNet-8x4)             | Top-1 accuracy (%) | 78.28   | –        | Unverified
5  | ResNet-8x4 (T: ResNet-32x4, S: ResNet-8x4 [modified])  | Top-1 accuracy (%) | 78.08   | –        | Unverified
6  | ReviewKD++ (T: ResNet-32x4, S: ShuffleNet-V2)          | Top-1 accuracy (%) | 77.93   | –        | Unverified
7  | ReviewKD++ (T: ResNet-32x4, S: ShuffleNet-V1)          | Top-1 accuracy (%) | 77.68   | –        | Unverified
8  | ResNet-8x4 (T: ResNet-32x4, S: ResNet-8x4)             | Top-1 accuracy (%) | 77.5    | –        | Unverified
9  | ResNet-8x4 (T: ResNet-32x4, S: ResNet-8x4)             | Top-1 accuracy (%) | 76.68   | –        | Unverified
10 | ResNet-8x4 (T: ResNet-32x4, S: ResNet-8x4)             | Top-1 accuracy (%) | 76.31   | –        | Unverified

# | Model                                  | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet-101, S: ResNet-50)    | mAP    | 93.17   | –        | Unverified
2 | LSHFM (T: ResNet-101, S: MobileNetV2)  | mAP    | 90.14   | –        | Unverified

# | Model                                | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2)  | RMSE   | 2.43    | –        | Unverified