Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have greater knowledge capacity than small models, this capacity may not be fully utilized, so a compact student model can often recover much of a large teacher's accuracy at a fraction of the inference cost.
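The classic recipe (Hinton et al., 2015) trains the student on a weighted combination of two signals: an ordinary cross-entropy loss against the ground-truth labels, and a KL-divergence loss that matches the student's temperature-softened output distribution to the teacher's. Below is a minimal PyTorch sketch of that loss; the function name and the hyperparameter values (temperature T, mixing weight alpha) are illustrative, not canonical.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-softened
    # student and teacher distributions. kl_div expects log-probabilities
    # as input and probabilities as target.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable across T
    # Hard-target term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In practice the teacher runs in eval mode under torch.no_grad(), so only the student receives gradients.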

Papers

Showing papers 1451–1475 of 4240 (page 59 of 170)

Each paper on this page currently has a hype score of 0.

A Transformer-in-Transformer Network Utilizing Knowledge Distillation for Image Recognition
Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition
Few-shot 3D LiDAR Semantic Segmentation for Autonomous Driving
Enhancing Modality-Agnostic Representations via Meta-Learning for Brain Tumor Segmentation
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders
Experimentation in Content Moderation using RWKV
Experimenting with Knowledge Distillation techniques for performing Brain Tumor Segmentation
Explainability-Driven Leaf Disease Classification Using Adversarial Training and Knowledge Distillation
Explainable Knowledge Distillation for On-device Chest X-Ray Classification
Explainable LLM-driven Multi-dimensional Distillation for E-Commerce Relevance Learning
Explaining Knowledge Distillation by Quantifying the Knowledge
Enhancing Mapless Trajectory Prediction through Knowledge Distillation
Compression of end-to-end non-autoregressive image-to-speech system for low-resourced devices
Compression of Deep Learning Models for Text: A Survey
Explicit Connection Distillation
Generalized Supervised Contrastive Learning
Explicit Knowledge Transfer for Weakly-Supervised Code Generation
Compression of Acoustic Event Detection Models With Quantized Distillation
Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
Few-shot Face Image Translation via GAN Prior Distillation
FGAD: Self-boosted Knowledge Distillation for An Effective Federated Graph Anomaly Detection Framework
Compressing Visual-linguistic Model via Knowledge Distillation
Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
Enhancing Generalization in Chain of Thought Reasoning for Smaller Models

Benchmark Results

Claimed values are the numbers reported in the respective papers; none of the results below have been verified yet.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | — | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | — | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | — | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | — | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | — | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | — | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | — | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | — | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | — | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | — | Unverified