
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized; a small student model trained to mimic a large teacher can therefore often recover most of the teacher's accuracy at a fraction of the inference cost.
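As a minimal sketch of the classic recipe (Hinton et al., 2015), the student is trained on a weighted blend of (a) the KL divergence between its temperature-softened predictions and the teacher's, and (b) ordinary cross-entropy against the ground-truth labels. The snippet below is illustrative only; the `temperature` and `alpha` values are assumptions, not settings taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Soft-target distillation loss; hyperparameters are illustrative."""
    # Soften both output distributions with the same temperature T.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # The KL term is scaled by T^2 so its gradient magnitude stays
    # comparable to the hard-label term as T grows.
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

During training, the teacher runs in inference mode (no gradients) and only the student's parameters are updated.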

Papers

Showing papers 3026–3050 of 4240 (page 122 of 170)

Title | Status | Hype
1st Place Solution to the EPIC-Kitchens Action Anticipation Challenge 2022 | — | 0
Improving Streaming End-to-End ASR on Transformer-based Causal Models with Encoder States Revision Strategies | — | 0
Low-resource Low-footprint Wake-word Detection using Knowledge Distillation | — | 0
GLANCE: Global to Local Architecture-Neutral Concept-based Explanations | Code | 0
PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient | — | 0
A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy | — | 0
ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation | Code | 0
VEM^2L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion | — | 0
FasterAI: A Lightweight Library for Creating Sparse Neural Networks | — | 0
PrUE: Distilling Knowledge from Sparse Teacher Networks | Code | 0
Speech Emotion: Investigating Model Representations, Multi-Task Learning and Knowledge Distillation | — | 0
Lost in Distillation: A Case Study in Toxicity Modeling | — | 0
Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate from the Perspective of DistilBERT | — | 0
Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks | — | 0
End-to-End Simultaneous Speech Translation with Pretraining and Distillation: Huawei Noah’s System for AutoSimTranS 2022 | — | 0
KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation | — | 0
ListBERT: Learning to Rank E-commerce products with Listwise BERT | — | 0
Extreme compression of sentence-transformer ranker models: faster inference, longer battery life, and less storage on edge devices | — | 0
Knowledge Distillation of Transformer-based Language Models Revisited | — | 0
Cooperative Retriever and Ranker in Deep Recommenders | Code | 0
QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design | — | 0
Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search | — | 0
Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification | — | 0
Mixed Sample Augmentation for Online Distillation | — | 0
Feature Representation Learning for Robust Retinal Disease Detection from Optical Coherence Tomography Images | Code | 0

Benchmark Results

In the tables below, "T:" denotes the teacher model and "S:" the student; "Claimed" is the value reported in the source paper. None of the results on this page have been independently verified yet.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | — | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | — | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | — | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | — | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | — | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | — | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | — | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | — | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | — | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | — | Unverified
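The image-classification entries above all report Top-1 accuracy, i.e. the percentage of test images whose highest-scoring predicted class matches the label. As a reference for how such a claimed number would typically be reproduced, here is a generic PyTorch evaluation sketch; `model`, `loader`, and `device` are placeholders, not tied to any listed entry.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    """Percentage of samples whose argmax prediction matches the label."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        logits = model(images.to(device))
        preds = logits.argmax(dim=-1)          # top-1 predicted class
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return 100.0 * correct / total
```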