
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
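In its classic formulation (Hinton et al., 2015), the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels, so it absorbs the "dark knowledge" carried by the teacher's relative class probabilities. The sketch below shows that basic logit-distillation loss, assuming PyTorch; the temperature and mixing weight are illustrative hyperparameters, not values taken from any paper listed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Classic logit distillation: a soft KL term against the teacher
    blended with hard-label cross-entropy."""
    # Soften both output distributions with the same temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_probs, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

During training, a frozen teacher produces `teacher_logits` for each batch and only the student is updated; many of the methods listed on this page extend this recipe with feature-level or relation-level objectives rather than plain logit matching.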

Papers

Showing 576–600 of 4240 papers

Title | Status | Hype
Asterisk*: Keep it Simple | – | 0
Knowledge Distillation Neural Network for Predicting Car-following Behaviour of Human-driven and Autonomous Vehicles | – | 0
Towards Lifelong Few-Shot Customization of Text-to-Image Diffusion | – | 0
Performance-Guided LLM Knowledge Distillation for Efficient Text Classification at Scale | – | 0
Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers | Code | 1
GazeGen: Gaze-Driven User Interaction for Visual Content Generation | – | 0
Towards Personalized Federated Learning via Comprehensive Knowledge Distillation | – | 0
Transformer-Based Fault-Tolerant Control for Fixed-Wing UAVs Using Knowledge Distillation and In-Context Adaptation | – | 0
Centerness-based Instance-aware Knowledge Distillation with Task-wise Mutual Lifting for Object Detection on Drone Imagery | – | 0
Multimodal Commonsense Knowledge Distillation for Visual Question Answering | – | 0
Training on the Test Model: Contamination in Ranking Distillation | Code | 0
Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment | – | 0
Towards Building Secure UAV Navigation with FHE-aware Knowledge Distillation | – | 0
Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation | – | 0
On the Impact of White-box Deployment Strategies for Edge AI on Latency and Model Performance | – | 0
Semantic Knowledge Distillation for Onboard Satellite Earth Observation Image Classification | Code | 0
The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation | – | 0
IP-MOT: Instance Prompt Learning for Cross-Domain Multi-Object Tracking | – | 0
Unsupervised Training of a Dynamic Context-Aware Deep Denoising Framework for Low-Dose Fluoroscopic Imaging | Code | 0
Unveiling Context-Aware Criteria in Self-Assessing LLMs | – | 0
Knowledge Distillation for Real-Time Classification of Early Media in Voice Communications | – | 0
Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA | – | 0
KD-LoRA: A Hybrid Approach to Efficient Fine-Tuning with LoRA and Knowledge Distillation | Code | 1
Deep Learning for Medical Text Processing: BERT Model Fine-Tuning and Comparative Study | – | 0
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models | – | 0
Page 24 of 170

Benchmark Results

In the model column of the tables below, T: denotes the teacher model and S: the student. All entries are claimed results that have not yet been verified.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified
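For reference, the two scalar metrics in these tables have simple definitions (mAP, which involves ranking detections across thresholds, is more involved and omitted here). A minimal sketch assuming NumPy; the function names are illustrative.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Percentage of samples whose highest-scoring class equals the label."""
    return 100.0 * float((logits.argmax(axis=1) == labels).mean())

def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root-mean-square error, as reported in the final table."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```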