
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
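
The classic formulation (Hinton et al., 2015) trains the student to match the teacher's temperature-softened output distribution alongside the ground-truth labels. Below is a minimal PyTorch-style sketch of that soft-label loss; the function name, temperature, and weighting are illustrative assumptions rather than values from any paper listed on this page, and many of the listed methods (feature-, relation-, or diffusion-based distillation) use different objectives.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-label KD: blend teacher-matching KL with hard-label cross-entropy."""
    # Soften both distributions with temperature T. The T*T factor keeps
    # the soft-loss gradient magnitude comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard supervised loss on the ground-truth class indices.
    hard = F.cross_entropy(student_logits, labels)
    # alpha trades off imitating the teacher against fitting the labels.
    return alpha * soft + (1.0 - alpha) * hard
```

A higher temperature T exposes more of the teacher's inter-class similarity structure ("dark knowledge"); alpha close to 1 weights the teacher signal most heavily.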

Papers

Showing 3001–3025 of 4240 papers

| Title | Status | Hype |
| --- | --- | --- |
| Domain-invariant Feature Exploration for Domain Generalization | – | 0 |
| HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks | – | 0 |
| Spatial-Channel Token Distillation for Vision MLPs | Code | 0 |
| Handling Data Heterogeneity in Federated Learning via Knowledge Distillation and Fusion | Code | 0 |
| Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay | Code | 0 |
| Federated Semi-Supervised Domain Adaptation via Knowledge Transfer | – | 0 |
| TinyViT: Fast Pretraining Distillation for Small Vision Transformers | – | 0 |
| Aware of the History: Trajectory Forecasting with the Local Behavior Data | – | 0 |
| Model Compression for Resource-Constrained Mobile Robots | – | 0 |
| Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems | – | 0 |
| Knowledge distillation with a class-aware loss for endoscopic disease detection | – | 0 |
| Context Unaware Knowledge Distillation for Image Retrieval | Code | 0 |
| Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution | – | 0 |
| Subclass Knowledge Distillation with Known Subclass Labels | – | 0 |
| TSPipe: Learn from Teacher Faster with Pipelines | Code | 0 |
| SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection | – | 0 |
| Deep versus Wide: An Analysis of Student Architectures for Task-Agnostic Knowledge Distillation of Self-Supervised Speech Models | – | 0 |
| Rethinking Attention Mechanism in Time Series Classification | – | 0 |
| Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting | – | 0 |
| SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision | – | 0 |
| Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing | – | 0 |
| DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning | – | 0 |
| Cross-Architecture Knowledge Distillation | – | 0 |
| Normalized Feature Distillation for Semantic Segmentation | – | 0 |
| Distilled Non-Semantic Speech Embeddings with Binary Neural Networks for Low-Resource Devices | Code | 0 |
Page 121 of 170

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified |
| 2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified |
| 3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified |
| 4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified |
| 5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified |
| 6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified |
| 7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified |
| 8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified |
| 9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified |
| 10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified |
| 2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified |
| 3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified |
| 4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified |
| 5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified |
| 6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified |
| 7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified |
| 8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified |
| 9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified |
| 10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified |
| 2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified |