SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized, and evaluating a large model remains computationally expensive. Distillation therefore trains a compact student model to reproduce the behavior of a larger teacher, typically by matching the teacher's softened output distribution in addition to the ground-truth labels, as sketched in the example below.
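
As a concrete illustration, here is a minimal sketch of the classic response-based distillation objective (soft-target matching, as in Hinton et al., 2015) in PyTorch. It is not taken from any of the papers listed below; the temperature, the weighting factor alpha, and the function name `distillation_loss` are illustrative assumptions.

```python
# Minimal sketch of response-based knowledge distillation (soft-target matching).
# Assumes PyTorch; temperature, alpha, and the function name are illustrative choices,
# not parameters prescribed by any particular paper on this page.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.7):
    """Weighted sum of a soft-target KL term and the usual cross-entropy on hard labels."""
    # Soften both output distributions with the same temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between teacher and student; the T^2 factor keeps gradient magnitudes comparable.
    kd_term = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    # Standard supervised term on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Toy usage with random tensors standing in for a batch of teacher/student outputs.
student_logits = torch.randn(8, 100, requires_grad=True)  # student predictions (8 samples, 100 classes)
teacher_logits = torch.randn(8, 100)                       # frozen teacher predictions
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()  # gradients flow only into the student
```

In practice the teacher is run in eval mode with gradients disabled, and the temperature and alpha are tuned per task; many of the papers below replace or augment this logit-matching term with feature-level or progressive distillation objectives.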

Papers

Showing 3401–3425 of 4240 papers

Title | Status | Hype
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher | - | 0
Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding | - | 0
A Short Study on Compressing Decoder-Based Language Models | - | 0
Know your tools well: Better and faster QA with synthetic examples | - | 0
Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm | - | 0
From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation | - | 0
Multilingual Neural Machine Translation: Can Linguistic Hierarchies Help? | - | 0
Kronecker Decomposition for GPT Compression | - | 0
Language Modelling via Learning to Rank | - | 0
False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation | - | 0
CONetV2: Efficient Auto-Channel Size Optimization for CNNs | Code | 0
Compact CNN Models for On-device Ocular-based User Recognition in Mobile Devices | - | 0
Rectifying the Data Bias in Knowledge Distillation | - | 0
Towards Streaming Egocentric Action Anticipation | - | 0
Towards Data-Free Domain Generalization | Code | 0
Visualizing the embedding space to explain the effect of knowledge distillation | - | 0
Cross-modal Knowledge Distillation for Vision-to-Sensor Action Recognition | Code | 0
Knowledge Distillation for Neural Transducers from Large Self-Supervised Pre-trained Models | - | 0
Peer Collaborative Learning for Polyphonic Sound Event Detection | - | 0
Online Hyperparameter Meta-Learning with Hypergradient Distillation | - | 0
Inter-Domain Alignment for Predicting High-Resolution Brain Networks Using Teacher-Student Learning | Code | 0
On the Interplay Between Sparsity, Naturalness, Intelligibility, and Prosody in Speech Synthesis | - | 0
Student Helping Teacher: Teacher Evolution via Self-Knowledge Distillation | Code | 0
Deep Neural Compression Via Concurrent Pruning and Self-Distillation | - | 0
Improving Neural Ranking via Lossless Knowledge Distillation | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy % | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy % | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy % | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy % | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy % | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy % | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy % | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified