SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model (the teacher) to a smaller one (the student). While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity is often not fully utilized; a compact student trained to mimic the teacher's outputs can therefore approach the teacher's accuracy at a much lower computational cost.
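
Below is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015), included only to make the idea concrete; the function name, the temperature T, and the weight alpha are illustrative choices and are not taken from any paper listed on this page.

```python
# Minimal sketch of soft-target knowledge distillation (assumes PyTorch).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend the teacher's softened predictions with the ground-truth labels."""
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true class labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

During training, the teacher is typically run in eval mode with gradients disabled, and only the student's parameters are updated against this combined loss.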

Papers

Showing 3151–3200 of 4240 papers

Title | Status | Hype
Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data | - | 0
Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition | - | 0
Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation | - | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | - | 0
Distilling Inter-Class Distance for Semantic Segmentation | - | 0
Distilling Invariant Representations with Dual Augmentation | - | 0
Distilling Knowledge for Short-to-Long Term Trajectory Prediction | - | 0
Distilling Knowledge from CNN-Transformer Models for Enhanced Human Action Recognition | - | 0
Distilling Knowledge from Deep Networks with Applications to Healthcare Domain | - | 0
Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation | - | 0
Distilling Knowledge from Pre-trained Language Models via Text Smoothing | - | 0
Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach | - | 0
Distilling Knowledge into Quantum Vision Transformers for Biomedical Image Classification | - | 0
Distilling Large Language Models for Efficient Clinical Information Extraction | - | 0
Distilling Missing Modality Knowledge from Ultrasound for Endometriosis Diagnosis with Magnetic Resonance Images | - | 0
Distilling Monocular Foundation Model for Fine-grained Depth Completion | - | 0
Distilling Multi-Level X-vector Knowledge for Small-footprint Speaker Verification | - | 0
Distilling Named Entity Recognition Models for Endangered Species from Large Language Models | - | 0
Distilling Normalizing Flows | - | 0
Distilling Object Detectors with Task Adaptive Regularization | - | 0
Distilling ODE Solvers of Diffusion Models into Smaller Steps | - | 0
Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces | - | 0
Distilling Pixel-Wise Feature Similarities for Semantic Segmentation | - | 0
Distilling portable Generative Adversarial Networks for Image Translation | - | 0
Distilling Privileged Multimodal Information for Expression Recognition using Optimal Transport | - | 0
Distilling Spatially-Heterogeneous Distortion Perception for Blind Image Quality Assessment | - | 0
Distilling Spikes: Knowledge Distillation in Spiking Neural Networks | - | 0
Distilling Structured Knowledge for Text-Based Relational Reasoning | - | 0
Distilling Temporal Knowledge with Masked Feature Reconstruction for 3D Object Detection | - | 0
Distilling Text Style Transfer With Self-Explanation From LLMs | - | 0
Distilling the Knowledge in Data Pruning | - | 0
Distilling BERT into Simple Neural Networks with Unlabeled Transfer Data | - | 0
Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification | - | 0
Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models | - | 0
DistillSpec: Improving Speculative Decoding via Knowledge Distillation | - | 0
Distill-then-prune: An Efficient Compression Framework for Real-time Stereo Matching Network on Edge Devices | - | 0
Distill to Delete: Unlearning in Graph Networks with Knowledge Distillation | - | 0
Distill-to-Label: Weakly Supervised Instance Labeling Using Knowledge Distillation | - | 0
DistillW2V2: A Small and Streaming Wav2vec 2.0 Based ASR Model | - | 0
DistPro: Searching A Fast Knowledge Distillation Process via Meta Optimization | - | 0
Distributed Learning for Wi-Fi AP Load Prediction | - | 0
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images | - | 0
Diverse Knowledge Distillation for End-to-End Person Search | - | 0
Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | - | 0
DLIP: Distilling Language-Image Pre-training | - | 0
DL-KDD: Dual-Light Knowledge Distillation for Action Recognition in the Dark | - | 0
DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation | - | 0
DNA 1.0 Technical Report | - | 0
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models | - | 0
Does Knowledge Distillation Matter for Large Language Model based Bundle Generation? | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy % | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy % | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy % | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy % | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy % | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy % | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy % | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified