
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity is often not fully utilized, so a compact student trained to mimic the large teacher's outputs can recover much of its accuracy at a fraction of the inference cost.
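The most common formulation (Hinton et al., 2015) trains the student on a weighted mix of ordinary hard-label cross-entropy and a KL-divergence term between temperature-softened teacher and student logits. Below is a minimal, generic PyTorch sketch of that loss; the function name, temperature T, and weight alpha are illustrative defaults, not values taken from any paper listed on this page.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: both distributions are softened by temperature T.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T*T factor keeps the soft-target gradients on the same scale
    # as the hard-label term when T changes.
    kd_term = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Ordinary cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Toy usage: random logits for a batch of 8 examples over 100 classes.
student = torch.randn(8, 100, requires_grad=True)
teacher = torch.randn(8, 100)          # in practice, computed under torch.no_grad()
targets = torch.randint(0, 100, (8,))
loss = distillation_loss(student, teacher, targets)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")

Higher temperatures spread probability mass over the non-target classes, which is where most of the transferable "dark knowledge" lives; the weight alpha is typically tuned per task.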

Papers

Showing 1976–2000 of 4240 papers

Title | Status | Hype
Improving the Transferability of Adversarial Examples by Inverse Knowledge Distillation | – | 0
Improving Video Model Transfer With Dynamic Representation Learning | – | 0
Efficient Machine Translation with Model Pruning and Quantization | – | 0
Combining Curriculum Learning and Knowledge Distillation for Dialogue Generation | – | 0
Improving Zero-Shot Multilingual Text Generation via Iterative Distillation | – | 0
Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks | – | 0
In-Context Learning Distillation for Efficient Few-Shot Fine-Tuning | – | 0
ABC-KD: Attention-Based-Compression Knowledge Distillation for Deep Learning-Based Noise Suppression | – | 0
Incorporating Ultrasound Tongue Images for Audio-Visual Speech Enhancement through Knowledge Distillation | – | 0
KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation | – | 0
Incremental Classifier Learning Based on PEDCC-Loss and Cosine Distance | – | 0
Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning | – | 0
Incremental Knowledge Based Question Answering | – | 0
Incremental Learning for End-to-End Automatic Speech Recognition | – | 0
Direct Distillation between Different Domains | – | 0
Kendall's τ Coefficient for Logits Distillation | – | 0
Knowledge Adaptation for Efficient Semantic Segmentation | – | 0
Efficient Knowledge Distillation via Curriculum Extraction | – | 0
Efficient Knowledge Distillation of SAM for Medical Image Segmentation | – | 0
Collective Wisdom: Improving Low-resource Neural Machine Translation using Adaptive Knowledge Distillation | – | 0
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights | – | 0
Incrementer: Transformer for Class-Incremental Semantic Segmentation With Knowledge Distillation Focusing on Old Class | – | 0
DiReDi: Distillation and Reverse Distillation for AIoT Applications | – | 0
Collective Knowledge Graph Completion with Mutual Knowledge Distillation | – | 0
Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160 S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4 S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet-32x4 S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet-32x4 S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet-32x4 S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | – | Unverified