SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity is often not fully utilized, so a compact student model can frequently recover much of the larger teacher's performance at a fraction of its cost.
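
In the most common formulation, the small "student" is trained to match the temperature-softened output distribution of the large "teacher" in addition to the ground-truth labels. The sketch below illustrates that classic soft-label loss (Hinton et al., 2015); the temperature, the weighting factor alpha, and all variable names are illustrative assumptions, not settings taken from any paper listed on this page.

```python
# Minimal sketch of the classic soft-label distillation loss
# (Hinton et al., 2015). Hyperparameters here are illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with softened teacher targets."""
    # Soften both distributions with the same temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between teacher and student soft distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_student, soft_targets,
                  reduction="batchmean") * (temperature ** 2)

    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1.0 - alpha) * ce

# Usage sketch: run both models on the same batch, keep the teacher frozen,
# and backpropagate only through the student, e.g.
#   with torch.no_grad():
#       teacher_logits = teacher(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
```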

Papers

Showing 4101–4150 of 4240 papers

| Title | Status | Hype |
| --- | --- | --- |
| Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner | | 0 |
| Mini-ResEmoteNet: Leveraging Knowledge Distillation for Human-Centered Design | | 0 |
| MiniVLN: Efficient Vision-and-Language Navigation by Progressive Knowledge Distillation | | 0 |
| MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning | | 0 |
| Mitigating Cross-client GANs-based Attack in Federated Learning | | 0 |
| Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal | | 0 |
| Mitigating Hallucination with ZeroG: An Advanced Knowledge Management Engine | | 0 |
| Mixed Distillation Helps Smaller Language Model Better Reasoning | | 0 |
| Mixed-Type Wafer Classification For Low Memory Devices Using Knowledge Distillation | | 0 |
| MixKD: Towards Efficient Distillation of Large-scale Language Models | | 0 |
| A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization: Data Augmentation and Semi-supervised Approaches | | 0 |
| MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network | | 0 |
| MK-SGN: A Spiking Graph Convolutional Network with Multimodal Fusion and Knowledge Distillation for Skeleton-based Action Recognition | | 0 |
| MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models | | 0 |
| Multimodal Matching-aware Co-attention Networks with Mutual Knowledge Distillation for Fake News Detection | | 0 |
| MOBA: Multi-teacher Model Based Reinforcement Learning | | 0 |
| MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation | | 0 |
| Modality-Inconsistent Continual Learning of Multimodal Large Language Models | | 0 |
| ModalityMirror: Improving Audio Classification in Modality Heterogeneity Federated Learning with Multimodal Distillation | | 0 |
| MSD: Saliency-aware Knowledge Distillation for Multimodal Understanding | | 0 |
| Modality-specific Distillation | | 0 |
| Model-Agnostic Decentralized Collaborative Learning for On-Device POI Recommendation | | 0 |
| Model Compression and Efficient Inference for Large Language Models: A Survey | | 0 |
| Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography | | 0 |
| Model Compression for Resource-Constrained Mobile Robots | | 0 |
| Model Compression Methods for YOLOv5: A Review | | 0 |
| Model compression using knowledge distillation with integrated gradients | | 0 |
| Model Compression Using Optimal Transport | | 0 |
| Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System | | 0 |
| Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System | | 0 |
| Model Distillation for Faithful Explanations of Medical Code Predictions | | 0 |
| Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification | | 0 |
| On Cross-Layer Alignment for Model Fusion of Heterogeneous Neural Networks | | 0 |
| A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation | | 0 |
| Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation | | 0 |
| Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples | | 0 |
| Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation | | 0 |
| Model Stitching by Functional Latent Alignment | | 0 |
| Modifying Final Splits of Classification Tree for Fine-tuning Subpopulation Target in Policy Making | | 0 |
| Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference | | 0 |
| MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | | 0 |
| MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router | | 0 |
| MoKD: Multi-Task Optimization for Knowledge Distillation | | 0 |
| MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | | 0 |
| Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation | | 0 |
| Mono2Stereo: Monocular Knowledge Transfer for Enhanced Stereo Matching | | 0 |
| More From Less: Self-Supervised Knowledge Distillation for Routine Histopathology Data | | 0 |
| Motion Pyramid Networks for Accurate and Efficient Cardiac Motion Estimation | | 0 |
| MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders | | 0 |
| MS-KD: Multi-Organ Segmentation with Multiple Binary-Labeled Datasets | | 0 |
Page 83 of 85

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | | Unverified |
| 2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | | Unverified |
| 3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | | Unverified |
| 4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | | Unverified |
| 5 | KD++ (T:regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | | Unverified |
| 6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | | Unverified |
| 7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | | Unverified |
| 8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | | Unverified |
| 9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.5 | | Unverified |
| 10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SRD (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 79.86 | | Unverified |
| 2 | shufflenet-v2 (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 78.76 | | Unverified |
| 3 | MV-MR (T:CLIP/ViT-B-16 S:resnet50) | Top-1 accuracy (%) | 78.6 | | Unverified |
| 4 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 78.28 | | Unverified |
| 5 | resnet8x4 (T:resnet32x4 S:resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | | Unverified |
| 6 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 77.93 | | Unverified |
| 7 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v1) | Top-1 accuracy (%) | 77.68 | | Unverified |
| 8 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 77.5 | | Unverified |
| 9 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.68 | | Unverified |
| 10 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSHFM (T:ResNet101 S:ResNet50) | mAP | 93.17 | | Unverified |
| 2 | LSHFM (T:ResNet101 S:MobileNetV2) | mAP | 90.14 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TIE-KD (T:Adabins S:MobileNetV2) | RMSE | 2.43 | | Unverified |