Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
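In the most common formulation (Hinton et al., 2015), the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. Below is a minimal PyTorch sketch of that soft-target loss; the function name, temperature T, and weight alpha are illustrative choices, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss in the style of Hinton et al. (2015).

    Mixes KL divergence between temperature-softened teacher and student
    distributions with ordinary cross-entropy on the hard labels.
    T=4.0 and alpha=0.9 are illustrative defaults, not benchmark settings.
    """
    # The teacher is frozen: no gradients should flow through its logits.
    teacher_logits = teacher_logits.detach()

    # Soften both distributions with temperature T.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)

    # T^2 rescaling keeps the soft-target gradient magnitude comparable
    # to the hard-label term as T varies.
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Many of the methods listed below replace or augment this logit-matching objective, for example with intermediate-feature or relation matching.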

Papers

Showing 3101–3150 of 4240 papers

Title | Status | Hype
"Teaching Independent Parts Separately" (TIPSy-GAN) : Improving Accuracy and Stability in Unsupervised Adversarial 2D to 3D Pose Estimation | - | 0
D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation | - | 0
Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time Person Re-Identification | Code | 0
Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning | - | 0
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | - | 0
Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | - | 0
Distilling Inter-Class Distance for Semantic Segmentation | - | 0
ConceptDistil: Model-Agnostic Distillation of Concept Explanations | - | 0
Collective Relevance Labeling for Passage Retrieval | Code | 0
A Deep Reinforcement Learning Framework for Rapid Diagnosis of Whole Slide Pathological Images | - | 0
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks | Code | 0
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems | - | 0
FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation | - | 0
Knowledge Distillation of Russian Language Models with Reduction of Vocabulary | Code | 0
Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss | - | 0
Generalized Knowledge Distillation via Relationship Matching | Code | 0
FedDKD: Federated Learning with Decentralized Knowledge Distillation | - | 0
Multi-Granularity Structural Knowledge Distillation for Language Model Compression | Code | 0
Low Resource Causal Event Detection from Biomedical Literature | - | 0
An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition | - | 0
Pretrained Speech Encoders and Efficient Fine-tuning Methods for Speech Translation: UPC at IWSLT 2022 | Code | 0
Domain-specific knowledge distillation yields smaller and better models for conversational commerce | - | 0
The Xiaomi Text-to-Text Simultaneous Speech Translation System for IWSLT 2022 | - | 0
Knowledge Distillation Meets Few-Shot Learning: An Approach for Few-Shot Intent Classification Within and Across Domains | - | 0
Domain Knowledge Transferring for Pre-trained Language Model via Calibrated Activation Boundary Distillation | Code | 0
Model Distillation for Faithful Explanations of Medical Code Predictions | - | 0
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation | Code | 0
CMU’s IWSLT 2022 Dialect Speech Translation System | - | 0
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing | - | 0
Multiple Degradation and Reconstruction Network for Single Image Denoising via Knowledge Distillation | - | 0
Human-Centered Prior-Guided and Task-Dependent Multi-Task Representation Learning for Action Recognition Pre-Training | - | 0
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers | - | 0
Transfer Learning with Pre-trained Conditional Generative Models | - | 0
One-shot Federated Learning without Server-side Training | Code | 0
Improving Feature Generalizability with Multitask Learning in Class Incremental Learning | - | 0
Selective Cross-Task Distillation | - | 0
Joint Feature Distribution Alignment Learning for NIR-VIS and VIS-VIS Face Recognition | - | 0
Revisiting Graph based Social Recommendation: A Distillation Enhanced Social Graph Network | - | 0
Boosting Pruned Networks with Linear Over-parameterization | - | 0
Unseen Object Instance Segmentation with Fully Test-time RGB-D Embeddings Adaptation | - | 0
Learning to Purification for Unsupervised Person Re-identification | - | 0
HRPose: Real-Time High-Resolution 6D Pose Estimation Network Using Knowledge Distillation | - | 0
Multi-Modal Few-Shot Object Detection with Meta-Learning-Based Cross-Modal Prompting | - | 0
CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation | - | 0
Ensemble diverse hypotheses and knowledge distillation for unsupervised cross-subject adaptation | Code | 0
Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection | - | 0
Impossible Triangle: What's Next for Pre-trained Language Models? | - | 0
DistPro: Searching A Fast Knowledge Distillation Process via Meta Optimization | - | 0
CoupleFace: Relation Matters for Face Recognition Distillation | - | 0
Towards On-Board Panoptic Segmentation of Multispectral Satellite Images | - | 0
Page 63 of 85

Benchmark Results

In the model names below, T: denotes the teacher and S: the student. No verified values have been recorded yet, so the Verified column is empty.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T:regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T:CLIP/ViT-B-16 S:resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T:resnet32x4 S:resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T:ResNet101 S:ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T:ResNet101 S:MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T:Adabins S:MobileNetV2) | RMSE | 2.43 | - | Unverified