SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity is often not fully utilized, so a compact student trained to mimic the large teacher's outputs can approach its accuracy at a much lower inference cost.
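
A common way to realize this transfer is the soft-target formulation of Hinton et al.: the student is trained on a weighted combination of the usual cross-entropy loss on the labels and a KL divergence between temperature-softened teacher and student logits. The following is a minimal sketch, assuming PyTorch; the function name, the temperature T = 4.0, and the weight alpha = 0.5 are illustrative assumptions, not values taken from any paper listed on this page.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Hinton-style KD loss: hard-label CE blended with a soft-target KL term.

    temperature and alpha are illustrative defaults (assumptions), not values
    reported by any paper on this page.
    """
    # Soften both distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL term, scaled by T^2 so its gradient magnitude stays comparable
    # to the cross-entropy term as the temperature changes.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term

# Example usage with random tensors standing in for a teacher/student pair.
if __name__ == "__main__":
    batch, num_classes = 8, 100
    teacher_logits = torch.randn(batch, num_classes)
    student_logits = torch.randn(batch, num_classes, requires_grad=True)
    labels = torch.randint(0, num_classes, (batch,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())

Feature-imitation and relation-based methods in the paper list below replace or augment this logit-level term with losses on intermediate representations, but the teacher-student training loop has the same shape.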

Papers

Showing 3001–3050 of 4240 papers

Title | Status | Hype
Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-guided Feature Imitation | - | 0
Boosting Contrastive Learning with Relation Knowledge Distillation | - | 0
Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation | Code | 0
A Contrastive Distillation Approach for Incremental Semantic Segmentation in Aerial Images | Code | 1
Improving Neural Cross-Lingual Summarization via Employing Optimal Transport Distance for Knowledge Distillation | Code | 1
ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images | Code | 0
Safe Distillation Box | - | 0
CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks | - | 0
Extracting knowledge from features with multilevel abstraction | - | 0
KDCTime: Knowledge Distillation with Calibration on InceptionTime for Time-series Classification | - | 0
Tiny-NewsRec: Effective and Efficient PLM-based News Recommendation | Code | 1
FedRAD: Federated Robust Adaptive Distillation | - | 0
A Fast Knowledge Distillation Framework for Visual Recognition | Code | 1
Information Theoretic Representation Distillation | Code | 1
The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image | Code | 1
Distilling Meta Knowledge on Heterogeneous Graph for Illicit Drug Trafficker Detection on Social Media | Code | 1
Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | Code | 1
Shapeshifter: a Parameter-efficient Transformer using Factorized Reshaped Matrices | Code | 0
Handling Long-tailed Feature Distribution in AdderNets | - | 0
Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation | Code | 1
Comprehensive Knowledge Distillation with Causal Intervention | Code | 1
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation | Code | 0
Adversarial Teacher-Student Representation Learning for Domain Generalization | Code | 0
Unsupervised Representation Transfer for Small Networks: I Believe I Can Distill On-the-Fly | - | 0
Using a GAN to Generate Adversarial Examples to Facial Image Recognition | - | 0
Improved Knowledge Distillation via Adversarial Collaboration | - | 0
Efficient Federated Learning for AIoT Applications Using Knowledge Distillation | - | 0
ESGN: Efficient Stereo Geometry Network for Fast 3D Object Detection | - | 0
WiFi-based Multi-task Sensing | Code | 1
Ensembling of Distilled Models from Multi-task Teachers for Constrained Resource Language Pairs | - | 0
EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation | Code | 1
Self-slimmed Vision Transformer | Code | 1
Domain-Agnostic Clustering with Self-Distillation | - | 0
Semi-Online Knowledge Distillation | Code | 0
Focal and Global Knowledge Distillation for Detectors | Code | 1
Hierarchical Knowledge Distillation for Dialogue Sequence Labeling | - | 0
Contrast-reconstruction Representation Learning for Self-supervised Skeleton-based Action Recognition | - | 0
Local-Selective Feature Distillation for Single Image Super-Resolution | - | 0
Teacher-Student Training and Triplet Loss to Reduce the Effect of Drastic Face Occlusion | - | 0
Toxicity Detection can be Sensitive to the Conversational Context | - | 0
Dynamically pruning segformer for efficient semantic segmentation | - | 0
Hierarchical Knowledge Guided Learning for Real-world Retinal Diseases Recognition | - | 0
An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition | Code | 0
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation | - | 0
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation | - | 0
Multi-Granularity Contrastive Knowledge Distillation for Multimodal Named Entity Recognition | - | 0
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation | - | 0
A Flexible Multi-Task Model for BERT Serving | - | 0
Compositional Data Augmentation for Abstractive Conversation Summarization | - | 0
Deep-to-bottom Weights Decay: A Systemic Knowledge Review Learning Technique for Transformer Layers in Knowledge Distillation | - | 0
Page 61 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified