
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
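
In the most common formulation, a student network is trained to match the teacher's softened output distribution in addition to the ground-truth labels. The snippet below is a minimal sketch of that soft-target loss (after Hinton et al., 2015), assuming PyTorch; the function name and hyperparameter values are illustrative and not taken from any paper listed here.

```python
# Minimal sketch of soft-target knowledge distillation (Hinton-style),
# assuming PyTorch; names and default hyperparameters are illustrative.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that matches the
    student's softened outputs to the teacher's."""
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Standard supervised loss on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the teacher runs in eval mode with gradients disabled, and only the student's parameters are updated.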

Papers

Showing 4051–4100 of 4240 papers

Title | Status | Hype
Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation | — | 0
Low Resource Causal Event Detection from Biomedical Literature | — | 0
Low-resource Low-footprint Wake-word Detection using Knowledge Distillation | — | 0
LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding | — | 0
LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition | — | 0
LTD: Low Temperature Distillation for Robust Adversarial Training | — | 0
M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning | — | 0
MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera Configurations | — | 0
Making Neural Machine Reading Comprehension Faster | — | 0
Making Small Language Models Better Few-Shot Learners | — | 0
Mamba base PKD for efficient knowledge compression | — | 0
MambaLiteSR: Image Super-Resolution with Low-Rank Mamba using Knowledge Distillation | — | 0
Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems | — | 0
MapDistill: Boosting Efficient Camera-based HD Map Construction via Camera-LiDAR Fusion Model Distillation | — | 0
Map-Free Trajectory Prediction with Map Distillation and Hierarchical Encoding | — | 0
Marine Saliency Segmenter: Object-Focused Conditional Diffusion with Region-Level Semantic Knowledge Distillation | — | 0
Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management | — | 0
Masked Autoencoders Are Stronger Knowledge Distillers | — | 0
The Role of Masking for Efficient Supervised Knowledge Distillation of Vision Transformers | — | 0
Masked Modeling Duo for Speech: Specializing General-Purpose Audio Representation to Speech using Denoising Distillation | — | 0
Matching Distributions between Model and Data: Cross-domain Knowledge Distillation for Unsupervised Domain Adaptation | — | 0
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function | — | 0
Maximum Likelihood Distillation for Robust Modulation Classification | — | 0
MCF-VC: Mitigate Catastrophic Forgetting in Class-Incremental Learning for Multimodal Video Captioning | — | 0
Enhancing Metaphor Detection through Soft Labels and Target Word Prediction | — | 0
Measuring and Reducing Model Update Regression in Structured Prediction for NLP | — | 0
Medical Image Segmentation on MRI Images with Missing Modalities: A Review | — | 0
MEDIC: Remove Model Backdoors via Importance Driven Cloning | — | 0
MedMAP: Promoting Incomplete Multi-modal Brain Tumor Segmentation with Alignment | — | 0
MED-TEX: Transferring and Explaining Knowledge with Less Data from Pretrained Medical Imaging Models | — | 0
Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation | — | 0
MentalMAC: Enhancing Large Language Models for Detecting Mental Manipulation via Multi-Task Anti-Curriculum Distillation | — | 0
MergeDistill: Merging Pre-trained Language Models using Distillation | — | 0
MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities | — | 0
MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation | — | 0
Meta-Ensemble Parameter Learning | — | 0
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains | — | 0
Meta Knowledge Distillation | — | 0
Meta-Learning across Meta-Tasks for Few-Shot Learning | — | 0
MetaMixer: A Regularization Strategy for Online Knowledge Distillation | — | 0
MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis | — | 0
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members | — | 0
MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression | — | 0
Microdosing: Knowledge Distillation for GAN based Compression | — | 0
Microsoft Research Asia's Systems for WMT19 | — | 0
MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery | — | 0
Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic NLP | — | 0
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks | — | 0
Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data | — | 0
Mind the Gap: Promoting Missing Modality Brain Tumor Segmentation with Alignment | — | 0
Page 82 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | — | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | — | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | — | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | — | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | — | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | — | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | — | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | — | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | — | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | — | Unverified