
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity is often not fully utilized; a compact student trained to mimic the teacher's outputs can therefore retain much of the teacher's accuracy at a fraction of the inference cost.
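
A common baseline behind many of these methods is the soft-target formulation of Hinton et al. (2015): the student is trained on a weighted combination of a KL-divergence term against the teacher's temperature-softened predictions and ordinary cross-entropy against the ground-truth labels. The sketch below is a minimal PyTorch illustration of that objective; the toy teacher and student networks, the temperature T=4.0, and the weight alpha=0.9 are illustrative assumptions rather than settings taken from any paper listed here.

```python
# Minimal soft-target knowledge distillation sketch (PyTorch).
# Teacher/student architectures, temperature T and weight alpha are
# illustrative assumptions, not values from any listed paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened teacher and
    # student distributions, scaled by T^2 to keep gradients comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a larger teacher MLP distilled into a smaller student MLP.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():          # teacher stays frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
optimizer.step()
```

Many of the papers below replace or augment this logit-matching term with feature-level, attention-based, or adversarial objectives, but the frozen-teacher, trainable-student training loop keeps the same shape.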

Papers

Showing papers 4176–4200 of 4240 (page 168 of 170)

Title | Status | Hype
MotherNets: Rapid Deep Ensemble Learning | — | 0
Label Denoising with Large Ensembles of Heterogeneous Neural Networks | — | 0
RDPD: Rich Data Helps Poor Data via Imitation | Code | 0
Lifelong Learning via Progressive Distillation and Retrospection | — | 0
Attention-Guided Answer Distillation for Machine Reading Comprehension | — | 0
Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks | — | 0
Cooperative Denoising for Distantly Supervised Relation Extraction | — | 0
SlimNets: An Exploration of Deep Model Compression and Acceleration | Code | 0
Self-supervised Knowledge Distillation Using Singular Value Decomposition | Code | 0
Revisiting Distillation and Incremental Classifier Learning | Code | 0
Distillation Techniques for Pseudo-rehearsal Based Incremental Learning | Code | 0
Gradient Adversarial Training of Neural Networks | — | 0
Knowledge Distillation by On-the-Fly Native Ensemble | Code | 0
Coupled End-to-End Transfer Learning With Generalized Fisher Information | — | 0
Collaborative Learning for Deep Neural Networks | — | 0
Channel Gating Neural Networks | Code | 1
A novel channel pruning method for deep neural network compression | — | 0
Theory and Experiments on Vector Quantized Autoencoders | Code | 0
Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation | — | 0
Recurrent knowledge distillation | — | 0
Knowledge Distillation in Generations: More Tolerant Teachers Educate Better Students | — | 0
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary | Code | 0
Born Again Neural Networks | Code | 0
Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems | Code | 0
Neural Compatibility Modeling with Attentive Knowledge Distillation | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy % | 86.43 | — | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy % | 85.53 | — | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy % | 83.93 | — | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 83.8 | — | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy % | 83.6 | — | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy % | 82.9 | — | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy % | 82.7 | — | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy % | 82.55 | — | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 82.5 | — | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | — | Unverified