
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity may not be fully utilized, and evaluating a large model is costly even when it is. Distillation therefore trains a compact student to reproduce the behavior of a large teacher, typically by matching the teacher's temperature-softened output distribution in addition to the ground-truth labels.
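
A minimal sketch of this soft-target objective in the style of Hinton et al. (2015), assuming PyTorch; the temperature T, the mixing weight alpha, and the name distillation_loss are illustrative choices, not taken from any paper listed below:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T=4.0, alpha=0.9):
        # Soften both output distributions with temperature T; higher T
        # exposes the teacher's relative probabilities for non-target classes.
        soft_student = F.log_softmax(student_logits / T, dim=-1)
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        # The T*T factor keeps the gradient scale of the soft term
        # comparable across temperatures.
        kd = F.kl_div(soft_student, soft_teacher,
                      reduction="batchmean") * (T * T)
        # Ordinary cross-entropy on the hard labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

Setting alpha close to 1 weights the teacher's soft targets heavily; alpha = 0 recovers plain supervised training.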

Papers

Showing 3601–3650 of 4240 papers

Title | Status | Hype
Fair Feature Distillation for Visual Recognition | - | 0
How Does Distilled Data Complexity Impact the Quality and Confidence of Non-Autoregressive Machine Translation? | - | 0
KnowSR: Knowledge Sharing among Homogeneous Agents in Multi-agent Reinforcement Learning | - | 0
Real-time Monocular Depth Estimation with Sparse Supervision on Mobile | - | 0
Experimenting with Knowledge Distillation techniques for performing Brain Tumor Segmentation | - | 0
AirNet: Neural Network Transmission over the Air | - | 0
Revisiting Knowledge Distillation for Object Detection | - | 0
Inplace knowledge distillation with teacher assistant for improved training of flexible deep neural networks | - | 0
Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | - | 0
Class-Incremental Few-Shot Object Detection | - | 0
Stacked Acoustic-and-Textual Encoding: Integrating the Pre-trained Models into Speech Translation Encoders | - | 0
KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation | - | 0
Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation | - | 0
Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates | - | 0
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack | - | 0
A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts | - | 0
Knowledge Distillation for Swedish NER models: A Search for Performance and Efficiency | - | 0
Contrastive Conditioning for Assessing Disambiguation in MT: A Case Study of Distilled Bias | Code | 0
Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation | - | 0
Distilling EEG Representations via Capsules for Affective Computing | - | 0
LIDAR and Position-Aided mmWave Beam Selection with Non-local CNNs and Curriculum Training | Code | 0
Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer | - | 0
Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network | Code | 0
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification | - | 0
Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation | - | 0
Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition | - | 0
Orderly Dual-Teacher Knowledge Distillation for Lightweight Human Pose Estimation | - | 0
Brittle Features May Help Anomaly Detection | - | 0
Knowledge Distillation as Semiparametric Inference | Code | 0
EduPal leaves no professor behind: Supporting faculty via a peer-powered recommender system | - | 0
Compact CNN Structure Learning by Knowledge Distillation | - | 0
Continual Learning for Fake Audio Detection | - | 0
Integration of Pre-trained Networks with Continuous Token Interface for End-to-End Spoken Language Understanding | - | 0
Unsupervised Continual Learning Via Pseudo Labels | - | 0
The Curious Case of Hallucinations in Neural Machine Translation | Code | 0
Sentence Embeddings by Ensemble Distillation | - | 0
Annealing Knowledge Distillation | Code | 0
Dealing with Missing Modalities in the Visual Question Answer-Difference Prediction Task through Knowledge Distillation | - | 0
Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation | - | 0
RankDistil: Knowledge Distillation for Ranking | - | 0
CXR Segmentation by AdaIN-based Domain Adaptation and Knowledge Distillation | Code | 0
Dual Discriminator Adversarial Distillation for Data-free Model Compression | - | 0
Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis | - | 0
Towards Enabling Meta-Learning from Target Models | Code | 0
GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference | Code | 0
Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression | Code | 0
Compressing Visual-linguistic Model via Knowledge Distillation | - | 0
Knowledge Distillation For Wireless Edge Learning | Code | 0
Students are the Best Teacher: Exit-Ensemble Distillation with Multi-Exits | Code | 0
Dialect Identification through Adversarial Learning and Knowledge Distillation on Romanian BERT | - | 0
Page 73 of 85

Benchmark Results

In each table below, T: denotes the teacher model and S: the student; the Verified column is empty until a result has been independently reproduced, hence the Unverified status.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
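
Each leaderboard entry pairs a frozen teacher (T:) with a trainable student (S:). A minimal sketch of one training step under that setup, assuming PyTorch and reusing the hypothetical distillation_loss defined earlier; the two nn.Linear modules are stand-ins for a real teacher/student pair, not any model from the tables:

    import torch
    import torch.nn as nn

    teacher = nn.Linear(32, 10).eval()   # stand-in for a large pretrained teacher
    student = nn.Linear(32, 10)          # stand-in for the smaller student
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

    x = torch.randn(64, 32)              # dummy batch of inputs
    y = torch.randint(0, 10, (64,))      # dummy hard labels

    with torch.no_grad():                # no gradients flow into the teacher
        teacher_logits = teacher(x)

    loss = distillation_loss(student(x), teacher_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                     # only the student's weights update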