SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have a higher knowledge capacity than small models, this capacity may not be fully utilized; a compact student model trained to mimic the large teacher can therefore often approach its accuracy at a fraction of the inference cost.
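
To make the transfer concrete, the sketch below shows the classic soft-target distillation loss (Hinton et al., 2015) that much of the work listed on this page builds on or compares against: the student is trained on a mix of the usual hard-label cross-entropy and a KL-divergence term that matches its temperature-softened predictions to the teacher's. This is a minimal PyTorch sketch under assumed names (student_logits, teacher_logits, temperature T, mixing weight alpha), not the implementation of any specific paper or benchmark entry below.

```python
# Minimal sketch of the standard soft-target distillation loss (Hinton et al., 2015).
# Names (student_logits, teacher_logits, T, alpha) are illustrative, not taken from
# any particular paper or benchmark entry on this page.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both distributions with temperature T; the teacher's softened
    # probabilities act as the "soft targets" the student tries to match.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term between student and teacher; the T**2 factor keeps its gradient
    # scale comparable to the hard-label term as T changes.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Ordinary cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In a typical training loop the teacher is frozen (run in eval mode under torch.no_grad()) and only the student's parameters receive gradients; larger values of T expose more of the teacher's probability mass on the non-target classes.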

Papers

Showing 2501-2550 of 4240 papers

Title | Hype
What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias | 0
What is Lost in Knowledge Distillation? | 0
What Knowledge Gets Distilled in Knowledge Distillation? | 0
What Makes a Good Dataset for Knowledge Distillation? | 0
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation | 0
When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario | 0
Which Student is Best? A Comprehensive Knowledge Distillation Exam for Task-Specific BERT Models | 0
DQ-Whisper: Joint Distillation and Quantization for Efficient Multilingual Speech Recognition | 0
Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks | 0
Why distillation helps: a statistical perspective | 0
Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate from the Perspective of DistilBERT | 0
Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation | 0
Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents | 0
Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning | 0
Wired Perspectives: Multi-View Wire Art Embraces Generative AI | 0
Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model | 0
WK-Pnet: FM-Based Positioning via Wavelet Packet Decomposition and Knowledge Distillation | 0
Word Sense Induction with Knowledge Distillation from BERT | 0
X^3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection | 0
X3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection | 0
XCOMPS: A Multilingual Benchmark of Conceptual Minimal Pairs | 0
XD: Cross-lingual Knowledge Distillation for Polyglot Sentence Embeddings | 0
X-Distill: Improving Self-Supervised Monocular Depth via Cross-Task Distillation | 0
Xiaomi's Submissions for IWSLT 2020 Open Domain Translation Task | 0
X Modality Assisting RGBT Object Tracking | 0
xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation | 0
Yield Evaluation of Citrus Fruits based on the YoloV5 compressed by Knowledge Distillation | 0
YOLO in the Dark - Domain Adaptation Method for Merging Multiple Models - | 0
You Can Have Your Data and Balance It Too: Towards Balanced and Efficient Multilingual Models | 0
You Do Not Need Additional Priors or Regularizers in Retinex-Based Low-Light Image Enhancement | 0
Zero shot framework for satellite image restoration | 0
Zero-shot Slot Filling in the Age of LLMs for Dialogue Systems | 0
Zoom-shot: Fast and Efficient Unsupervised Zero-Shot Transfer of CLIP to Vision Encoders with Multimodal Loss | 0
Deep Face Recognition Model Compression via Knowledge Transfer and Distillation | 0
1st Place Solution to the EPIC-Kitchens Action Anticipation Challenge 2022 | 0
AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness | 0
VIPeR: Visual Incremental Place Recognition with Adaptive Mining and Continual Learning | 0
Learning Effective Representations for Retrieval Using Self-Distillation with Adaptive Relevance Margins | 0
Dynamic Object Queries for Transformer-based Incremental Object Detection | 0
Gemma 2: Improving Open Language Models at a Practical Size | 0
StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization | 0
Sentence-wise Speech Summarization: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation | 0
DistillGrasp: Integrating Features Correlation with Knowledge Distillation for Depth Completion of Transparent Objects | 0
An approach to optimize inference of the DIART speaker diarization pipeline | 0
VizECGNet: Visual ECG Image Network for Cardiovascular Diseases Classification with Multi-Modal Training and Knowledge Distillation | 0
Inference Optimizations for Large Language Models: Effects, Challenges, and Practical Considerations | 0
On Importance of Pruning and Distillation for Efficient Low Resource NLP | 0
ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data | 0
VRM: Knowledge Distillation via Virtual Relation Matching | 0
Advancing Multiple Instance Learning with Continual Learning for Whole Slide Imaging | 0
Page 51 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified