
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have a higher knowledge capacity than small models, that capacity may not be fully utilized; a compact student model trained to imitate a large teacher can therefore often recover much of the teacher's accuracy at a fraction of the inference cost.
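
The most common formulation, introduced by Hinton et al. (2015), trains the student against a weighted sum of two terms: the usual cross-entropy loss on the ground-truth labels, and a KL-divergence term that pushes the student's temperature-softened output distribution toward the teacher's. Below is a minimal PyTorch sketch of this loss; the helper name `distillation_loss` and the hyperparameter values `T=4.0` and `alpha=0.9` are illustrative choices, not values taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss (after Hinton et al., 2015).

    T     -- temperature; higher values soften both probability distributions.
    alpha -- weight on the distillation term versus the hard-label term.
    """
    # KL divergence between the temperature-softened teacher and student
    # distributions. F.kl_div expects log-probabilities as its first argument
    # and probabilities as its second. The T**2 factor keeps gradient
    # magnitudes roughly constant as the temperature changes.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)

    # Ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)

    return alpha * soft + (1.0 - alpha) * hard

# Usage on one batch (`teacher`, `student`, `inputs`, `labels` are placeholders):
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(inputs)
# loss = distillation_loss(student(inputs), teacher_logits, labels)
# loss.backward()
```

The same teacher-student pattern underlies most of the methods below; the `(T: ..., S: ...)` notation in the benchmark tables names the teacher and student architectures of each entry.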

Papers

Showing 1951–2000 of 4240 papers

Title | Status | Hype
Analyzing Compression Techniques for Computer Vision | - | 0
On enhancing the robustness of Vision Transformers: Defensive Diffusion | Code | 0
Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation | Code | 0
AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference | - | 0
Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation | - | 0
GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples | Code | 0
A Lightweight Domain Adversarial Neural Network Based on Knowledge Distillation for EEG-based Cross-subject Emotion Recognition | - | 0
Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping | - | 0
Improving Continual Relation Extraction by Distinguishing Analogous Semantics | Code | 1
Long-Tailed Question Answering in an Open World | - | 0
Serial Contrastive Knowledge Distillation for Continual Few-shot Relation Extraction | Code | 1
A Survey on the Robustness of Computer Vision Models against Common Corruptions | Code | 0
Explainable Knowledge Distillation for On-device Chest X-Ray Classification | - | 0
Multi-Teacher Knowledge Distillation For Text Image Machine Translation | Code | 0
SRIL: Selective Regularization for Class-Incremental Learning | - | 0
FedNoRo: Towards Noise-Robust Federated Learning by Addressing Class Imbalance and Label Noise Heterogeneity | Code | 1
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Code | 1
DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing | - | 0
Distilling Script Knowledge from Large Language Models for Constrained Language Planning | Code | 1
Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation | - | 0
NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge | - | 0
Web Content Filtering through knowledge distillation of Large Language Models | - | 0
Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation | - | 0
Distilled Mid-Fusion Transformer Networks for Multi-Modal Human Activity Recognition | - | 0
Smaller3d: Smaller Models for 3D Semantic Segmentation Using Minkowski Engine and Knowledge Distillation Methods | Code | 0
Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty | Code | 1
SCOTT: Self-Consistent Chain-of-Thought Distillation | Code | 1
A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training | Code | 0
DeepAqua: Self-Supervised Semantic Segmentation of Wetland Surface Water Extent with SAR Images using Knowledge Distillation | Code | 1
Structure Aware Incremental Learning with Personalized Imitation Weights for Recommender Systems | - | 0
Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models | - | 0
Detect, Distill and Update: Learned DB Systems Facing Out of Distribution Data | Code | 0
Refined Response Distillation for Class-Incremental Player Detection | Code | 0
Scaffolding a Student to Instill Knowledge | Code | 0
Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation | Code | 0
CORSD: Class-Oriented Relational Self Distillation | - | 0
Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation | Code | 0
Learning Human-Human Interactions in Images from Weak Textual Supervision | - | 0
A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering | Code | 1
Shape-Net: Room Layout Estimation from Panoramic Images Robust to Occlusion using Knowledge Distillation with 3D Shapes as Additional Inputs | - | 0
Class Attention Transfer Based Knowledge Distillation | Code | 1
Improving Knowledge Distillation via Transferring Learning Ability | Code | 0
A Forward and Backward Compatible Framework for Few-shot Class-incremental Pill Recognition | Code | 0
Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving | - | 0
Knowledge Distillation from 3D to Bird's-Eye-View for LiDAR Semantic Segmentation | Code | 1
Decouple Non-parametric Knowledge Distillation For End-to-end Speech Translation | - | 0
Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs | Code | 1
Word Sense Induction with Knowledge Distillation from BERT | - | 0
Attention Weighted Local Descriptors | Code | 1
Knowledge Distillation Under Ideal Joint Classifier Assumption | - | 0
Page 40 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified