SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
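
The papers below differ in what they match between teacher and student (logits, intermediate features, attention maps), but the common baseline is response-based distillation in the style of Hinton et al. The snippet below is a minimal sketch of that baseline loss, not the method of any particular paper listed here; it assumes PyTorch is available, and the temperature and weighting values are illustrative defaults only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Response-based KD: weighted sum of a soft-target KL term and
    ordinary cross-entropy on the ground-truth labels."""
    # Soften both distributions with the temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Hard-label supervision on the student's raw logits.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Usage with random tensors standing in for teacher/student forward passes.
if __name__ == "__main__":
    student_logits = torch.randn(8, 100)   # batch of 8, 100 classes
    teacher_logits = torch.randn(8, 100)
    labels = torch.randint(0, 100, (8,))
    print(distillation_loss(student_logits, teacher_logits, labels).item())
```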

Papers

Showing 4151–4200 of 4240 papers

| Title | Status | Hype |
| --- | --- | --- |
| PaCKD: Pattern-Clustered Knowledge Distillation for Compressing Memory Access Prediction Models | Code | 0 |
| ERNIE-Tiny : A Progressive Distillation Framework for Pretrained Transformer Compression | Code | 0 |
| Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation | Code | 0 |
| ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization | Code | 0 |
| Data-Free Adversarial Distillation | Code | 0 |
| ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation | Code | 0 |
| Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation | Code | 0 |
| Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation | Code | 0 |
| Data exploitation: multi-task learning of object detection and semantic segmentation on partially annotated data | Code | 0 |
| Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression | Code | 0 |
| Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation | Code | 0 |
| DASK: Distribution Rehearsing via Adaptive Style Kernel Learning for Exemplar-Free Lifelong Person Re-Identification | Code | 0 |
| DAD++: Improved Data-free Test Time Adversarial Defense | Code | 0 |
| Ensemble Learning via Knowledge Transfer for CTR Prediction | Code | 0 |
| Aligning (Medical) LLMs for (Counterfactual) Fairness | Code | 0 |
| A Tailored Pre-Training Model for Task-Oriented Dialog Generation | Code | 0 |
| Ensemble Knowledge Distillation for Learning Improved and Efficient Networks | Code | 0 |
| Ensemble diverse hypotheses and knowledge distillation for unsupervised cross-subject adaptation | Code | 0 |
| Patient Knowledge Distillation for BERT Model Compression | Code | 0 |
| Ensemble Distillation for Robust Model Fusion in Federated Learning | Code | 0 |
| Enhancing Weakly-Supervised Histopathology Image Segmentation with Knowledge Distillation on MIL-Based Pseudo-Labels | Code | 0 |
| Enhancing TinyBERT for Financial Sentiment Analysis Using GPT-Augmented FinBERT Distillation | Code | 0 |
| DAdEE: Unsupervised Domain Adaptation in Early Exit PLMs | Code | 0 |
| Self-supervised Knowledge Distillation Using Singular Value Decomposition | Code | 0 |
| Enhancing Scene Classification in Cloudy Image Scenarios: A Collaborative Transfer Method with Information Regulation Mechanism using Optical Cloud-Covered and SAR Remote Sensing Images | Code | 0 |
| Enhancing New-item Fairness in Dynamic Recommender Systems | Code | 0 |
| D^2TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization | Code | 0 |
| cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation | Code | 0 |
| Teaching MLPs to Master Heterogeneous Graph-Structured Knowledge for Efficient and Accurate Inference | Code | 0 |
| Uniformity First: Uniformity-aware Test-time Adaptation of Vision-language Models against Image Corruption | Code | 0 |
| Meta-Learned Modality-Weighted Knowledge Distillation for Robust Multi-Modal Learning with Missing Data | Code | 0 |
| Customizing Synthetic Data for Data-Free Student Learning | Code | 0 |
| Enhancing Low-Resource NMT with a Multilingual Encoder and Knowledge Distillation: A Case Study | Code | 0 |
| CXR Segmentation by AdaIN-based Domain Adaptation and Knowledge Distillation | Code | 0 |
| Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers | Code | 0 |
| Unifying Heterogeneous Classifiers with Distillation | Code | 0 |
| Blind Knowledge Distillation for Robust Image Classification | Code | 0 |
| Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment | Code | 0 |
| CSE: Surface Anomaly Detection with Contrastively Selected Embedding | Code | 0 |
| Periodic Intra-Ensemble Knowledge Distillation for Reinforcement Learning | Code | 0 |
| Cross-View Consistency Regularisation for Knowledge Distillation | Code | 0 |
| A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training | Code | 0 |
| Cross-modal Knowledge Distillation for Vision-to-Sensor Action Recognition | Code | 0 |
| Cross Modality Knowledge Distillation for Multi-Modal Aerial View Object Classification | Code | 0 |
| Unifying Synergies between Self-supervised Learning and Dynamic Computation | Code | 0 |
| SELF-VS: Self-supervised Encoding Learning For Video Summarization | Code | 0 |
| TQCompressor: improving tensor decomposition methods in neural networks via permutations | Code | 0 |
| Technical Report for the 5th CLVision Challenge at CVPR: Addressing the Class-Incremental with Repetition using Unlabeled Data -- 4th Place Solution | Code | 0 |
| Enhancing Knowledge Distillation for LLMs with Response-Priming Prompting | Code | 0 |
| Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ScaleKD (T: BEiT-L S: ViT-B/14) | Top-1 accuracy % | 86.43 | | Unverified |
| 2 | ScaleKD (T: Swin-L S: ViT-B/16) | Top-1 accuracy % | 85.53 | | Unverified |
| 3 | ScaleKD (T: Swin-L S: ViT-S/16) | Top-1 accuracy % | 83.93 | | Unverified |
| 4 | ScaleKD (T: Swin-L S: Swin-T) | Top-1 accuracy % | 83.8 | | Unverified |
| 5 | KD++ (T: regnety-16GF S: ViT-B) | Top-1 accuracy % | 83.6 | | Unverified |
| 6 | VkD (T: RegNety 160 S: DeiT-S) | Top-1 accuracy % | 82.9 | | Unverified |
| 7 | SpectralKD (T: Swin-S S: Swin-T) | Top-1 accuracy % | 82.7 | | Unverified |
| 8 | ScaleKD (T: Swin-L S: ResNet-50) | Top-1 accuracy % | 82.55 | | Unverified |
| 9 | DiffKD (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.5 | | Unverified |
| 10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SRD (T: resnet-32x4 S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | | Unverified |
| 2 | shufflenet-v2 (T: resnet-32x4 S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | | Unverified |
| 3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | | Unverified |
| 4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | | Unverified |
| 5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | | Unverified |
| 6 | ReviewKD++ (T: resnet-32x4 S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | | Unverified |
| 7 | ReviewKD++ (T: resnet-32x4 S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | | Unverified |
| 8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | | Unverified |
| 9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | | Unverified |
| 10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | | Unverified |
| 2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | | Unverified |
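
The Claimed column reports each paper's published number, while Verified is left blank for results that have not yet been reproduced. For the classification entries above the metric is plain Top-1 accuracy, which could be recomputed from a released student checkpoint roughly as in the sketch below; this is an illustrative PyTorch evaluation loop, not this site's verification pipeline, and `model` and `loader` are placeholder names.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Percentage of samples whose highest-scoring class matches the label."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=-1)   # top-1 prediction per sample
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total
```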