Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. Large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, but this capacity may not be fully utilized. Distillation exploits this by training a smaller "student" model to mimic the outputs of a large "teacher" model, often retaining most of the teacher's accuracy at a fraction of the inference cost.
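
As a concrete illustration, below is a minimal PyTorch sketch of the classic soft-label distillation loss of Hinton et al. (2015), which many of the methods listed on this page build on: the student is trained to match the teacher's temperature-softened output distribution alongside the usual hard-label objective. The temperature `T` and mixing weight `alpha` are illustrative defaults, not settings taken from any paper below.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-label knowledge distillation loss (Hinton et al., 2015)."""
    # KL divergence between temperature-softened teacher and student
    # distributions; the T^2 factor keeps gradient magnitudes comparable
    # across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy usage with random logits for a 10-class problem; in practice the
# teacher logits come from a frozen forward pass of the large model.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```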

Papers

Showing 951–975 of 4240 papers

| Title | Status | Hype |
| --- | --- | --- |
| Look Around and Refer: 2D Synthetic Semantics Knowledge Distillation for 3D Visual Grounding | Code | 1 |
| MetricGAN-OKD: Multi-Metric Optimization of MetricGAN via Online Knowledge Distillation for Speech Enhancement | Code | 1 |
| Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision | Code | 1 |
| DE-RRD: A Knowledge Distillation Framework for Recommender System | Code | 1 |
| Multi-View Fusion and Distillation for Subgrade Distresses Detection based on 3D-GPR | Code | 1 |
| Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective | Code | 1 |
| Knowledge Distillation Meets Self-Supervision | Code | 1 |
| Learning Lightweight Lane Detection CNNs by Self Attention Distillation | Code | 0 |
| An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering | Code | 0 |
| Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models | Code | 0 |
| Learning Efficient Detector with Semi-supervised Adaptive Distillation | Code | 0 |
| Learning from Noisy Crowd Labels with Logics | Code | 0 |
| Bridging Modalities: Knowledge Distillation and Masked Training for Translating Multi-Modal Emotion Recognition to Uni-Modal, Speech-Only Emotion Recognition | Code | 0 |
| Bridging Dimensions: Confident Reachability for High-Dimensional Controllers | Code | 0 |
| ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images | Code | 0 |
| Leaning Compact and Representative Features for Cross-Modality Person Re-Identification | Code | 0 |
| Learning Deep and Compact Models for Gesture Recognition | Code | 0 |
| Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition | Code | 0 |
| Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0 |
| An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning | Code | 0 |
| An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning | Code | 0 |
| Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0 |
| Born Again Neural Networks | Code | 0 |
| Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation | Code | 0 |
| KS-DETR: Knowledge Sharing in Attention Learning for Detection Transformer | Code | 0 |

Benchmark Results

In each table, "T:" denotes the teacher model and "S:" the student. "Claimed" is the result reported by the authors; the "Verified" column is blank where the claim has not yet been independently reproduced (status "Unverified").

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | | Unverified |
| 2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | | Unverified |
| 3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | | Unverified |
| 4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | | Unverified |
| 5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | | Unverified |
| 6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | | Unverified |
| 7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | | Unverified |
| 8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | | Unverified |
| 9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | | Unverified |
| 10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | | Unverified |
| 2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | | Unverified |
| 3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | | Unverified |
| 4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | | Unverified |
| 5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | | Unverified |
| 6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | | Unverified |
| 7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | | Unverified |
| 8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | | Unverified |
| 9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | | Unverified |
| 10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | | Unverified |
| 2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | | Unverified |