
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
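
For orientation, the sketch below shows the classic soft-target distillation loss (a temperature-scaled KL term from teacher to student blended with ordinary cross-entropy on the labels), which most of the papers listed on this page extend or replace. It is a minimal illustration: the function name, temperature, and weighting values are assumptions for the example, not the recipe of any specific paper here.

```python
# Minimal sketch of soft-target knowledge distillation (Hinton-style).
# Temperature and alpha are illustrative assumptions, not tuned values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Blend a soft-target KL term (teacher -> student) with hard-label CE."""
    # Soften both distributions with the temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Typical use in a training step: the teacher is frozen, only the student is updated.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
# loss.backward()
```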

Papers

Showing 3201–3250 of 4240 papers

Title | Status | Hype
Local Correlation Consistency for Knowledge Distillation | – | 0
LoCa: Logit Calibration for Knowledge Distillation | – | 0
Locally Linear Region Knowledge Distillation | – | 0
Local-Selective Feature Distillation for Single Image Super-Resolution | – | 0
Local-to-Global Self-Supervised Representation Learning for Diabetic Retinopathy Grading | – | 0
Local vs. Global: Local Land-Use and Land-Cover Models Deliver Higher Quality Maps | – | 0
Logic Distillation: Learning from Code Function by Function for Planning and Decision-making | – | 0
Logits Poisoning Attack in Federated Distillation | – | 0
LokiLM: Technical Report | – | 0
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning | – | 0
Long-Range Zero-Shot Generative Deep Network Quantization | – | 0
Long-Tailed Continual Learning For Visual Food Recognition | – | 0
Long-tailed Food Classification | – | 0
Hierarchical Knowledge Guided Learning for Real-world Retinal Diseases Recognition | – | 0
Long-Tailed Question Answering in an Open World | – | 0
Long-Term Vehicle Localization by Recursive Knowledge Distillation | – | 0
LookALike: Human Mimicry based collaborative decision making | – | 0
Look Backward and Forward: Self-Knowledge Distillation with Bidirectional Decoder for Neural Machine Translation | – | 0
Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition | – | 0
Lost in Distillation: A Case Study in Toxicity Modeling | – | 0
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer | – | 0
Low-Dimensional Federated Knowledge Graph Embedding via Knowledge Distillation | – | 0
Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network | – | 0
Low-Resolution Chest X-ray Classification via Knowledge Distillation and Multi-task Learning | – | 0
Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation | – | 0
Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation | – | 0
Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation | – | 0
Low Resource Causal Event Detection from Biomedical Literature | – | 0
Low-resource Low-footprint Wake-word Detection using Knowledge Distillation | – | 0
LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding | – | 0
LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition | – | 0
LTD: Low Temperature Distillation for Robust Adversarial Training | – | 0
M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning | – | 0
MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera Configurations | – | 0
Making Neural Machine Reading Comprehension Faster | – | 0
Making Small Language Models Better Few-Shot Learners | – | 0
Mamba base PKD for efficient knowledge compression | – | 0
MambaLiteSR: Image Super-Resolution with Low-Rank Mamba using Knowledge Distillation | – | 0
Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems | – | 0
MapDistill: Boosting Efficient Camera-based HD Map Construction via Camera-LiDAR Fusion Model Distillation | – | 0
Map-Free Trajectory Prediction with Map Distillation and Hierarchical Encoding | – | 0
Marine Saliency Segmenter: Object-Focused Conditional Diffusion with Region-Level Semantic Knowledge Distillation | – | 0
Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management | – | 0
Masked Autoencoders Are Stronger Knowledge Distillers | – | 0
The Role of Masking for Efficient Supervised Knowledge Distillation of Vision Transformers | – | 0
Masked Modeling Duo for Speech: Specializing General-Purpose Audio Representation to Speech using Denoising Distillation | – | 0
Matching Distributions between Model and Data: Cross-domain Knowledge Distillation for Unsupervised Domain Adaptation | – | 0
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function | – | 0
Maximum Likelihood Distillation for Robust Modulation Classification | – | 0
MCF-VC: Mitigate Catastrophic Forgetting in Class-Incremental Learning for Multimodal Video Captioning | – | 0
Page 65 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T:CLIP/ViT-B-16 S:resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T:resnet32x4 S:resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T:ResNet101 S:ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T:ResNet101 S:MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T:Adabins S:MobileNetV2) | RMSE | 2.43 | – | Unverified