
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. Large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, but that capacity is often not fully utilized at inference time. Distillation trains a compact "student" model to mimic the outputs of a large "teacher" model, so the student can approach the teacher's accuracy at a fraction of the memory and compute cost.
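
As a concrete illustration, below is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015), written in PyTorch. The temperature and loss-weighting values are illustrative defaults, not settings taken from any paper listed on this page.

```python
# Minimal sketch of soft-target knowledge distillation (assumes PyTorch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the usual cross-entropy with a KL term that pushes the
    student's softened predictions toward the teacher's."""
    # Soften both output distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the softened distributions; the T^2 factor
    # keeps its gradient magnitude comparable to the hard-label term.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Standard supervised loss on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the teacher is frozen and only the student's parameters receive gradients; the weighting `alpha` and `temperature` are tuned per task.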

Papers

Showing 2476–2500 of 4240 papers

Title | Status | Hype
Wasserstein Contrastive Representation Distillation | — | 0
Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation | — | 0
WAVE: Weight Template for Adaptive Initialization of Variable-sized Models | — | 0
Weakly Supervised Cross-lingual Semantic Relation Classification via Knowledge Distillation | — | 0
Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | — | 0
Weakly-Supervised Domain Adaptation of Deep Regression Trackers via Reinforced Knowledge Distillation | — | 0
Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning | — | 0
Weakly Supervised Monocular 3D Detection with a Single-View Image | — | 0
Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching | — | 0
Weak-to-Strong Backdoor Attack for Large Language Models | — | 0
Wearable Accelerometer Foundation Models for Health via Knowledge Distillation | — | 0
WebChild 2.0 : Fine-Grained Commonsense Knowledge Distillation | — | 0
Web Content Filtering through knowledge distillation of Large Language Models | — | 0
WebUOT-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark | — | 0
WeChat Neural Machine Translation Systems for WMT20 | — | 0
WeChat Neural Machine Translation Systems for WMT21 | — | 0
WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations | — | 0
Weight Averaging: A Simple Yet Effective Method to Overcome Catastrophic Forgetting in Automatic Speech Recognition | — | 0
Weight Decay Scheduling and Knowledge Distillation for Active Learning | — | 0
Weight Distillation: Transferring the Knowledge in Neural Network Parameters | — | 0
Weighted KL-Divergence for Document Ranking Model Refinement | — | 0
Weight Squeezing: Reparameterization for Compression and Fast Inference | — | 0
Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding | — | 0
What do larger image classifiers memorise? | — | 0
What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models | — | 0
Page 100 of 170

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | — | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | — | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | — | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | — | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | — | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | — | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | — | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | — | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | — | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | — | Unverified