SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized; a compact student model trained to mimic the larger teacher can therefore often approach the teacher's accuracy at a fraction of the inference cost.
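
As a concrete illustration, below is a minimal sketch of the classic softened-logit distillation loss of Hinton et al. (2015) in PyTorch. The temperature T and mixing weight alpha are illustrative hyperparameters, not values taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Softened-logit knowledge distillation loss (Hinton et al., 2015).

    Blends a KL term between temperature-scaled teacher and student
    distributions with ordinary cross-entropy on the ground-truth labels.
    """
    # Soft targets: KL divergence between softened distributions.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Typical training step: the teacher runs in eval mode with no gradients.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
```

Raising the temperature spreads probability mass over more classes, exposing the teacher's inter-class similarity structure (its "dark knowledge") to the student rather than only its top prediction.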

Papers

Showing 3251–3300 of 4240 papers

Each paper is listed with a [Code] tag when an implementation is available; every paper on this page currently has a hype score of 0.

- MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
- CL-ReKD: Cross-lingual Knowledge Distillation for Multilingual Retrieval Question Answering
- Nearest Neighbor Knowledge Distillation for Neural Machine Translation
- Transferring Knowledge from Structure-aware Self-attention Language Model to Sequence-to-Sequence Semantic Parsing
- Tree Knowledge Distillation for Compressing Transformer-Based Language Models
- Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners
- On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification [Code]
- FedDTG: Federated Data-Free Knowledge Distillation via Three-Player Generative Adversarial Networks
- Two-Pass End-to-End ASR Model Compression
- Microdosing: Knowledge Distillation for GAN based Compression
- Class-Incremental Continual Learning into the eXtended DER-verse
- Which Student is Best? A Comprehensive Knowledge Distillation Exam for Task-Specific BERT Models
- Improving Video Model Transfer With Dynamic Representation Learning
- Distillation Using Oracle Queries for Transformer-Based Human-Object Interaction Detection
- Class Similarity Weighted Knowledge Distillation for Continual Semantic Segmentation
- Image Restoration using Feature-guidance
- Performance-Aware Mutual Knowledge Distillation for Improving Neural Architecture Search
- Multi-Objective Diverse Human Motion Prediction With Knowledge Distillation
- Conditional Generative Data-free Knowledge Distillation
- Data-Free Knowledge Transfer: A Survey
- An Efficient Federated Distillation Learning System for Multi-task Time Series Classification
- Automatic Mixed-Precision Quantization Search of BERT
- Online Adversarial Knowledge Distillation for Graph Neural Networks [Code]
- Distilling the Knowledge of Romanian BERTs Using Multiple Teachers [Code]
- Adaptive Beam Search to Enhance On-device Abstractive Summarization
- Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation
- Supervised Graph Contrastive Pretraining for Text Classification
- Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix
- Controlling the Quality of Distillation in Response-Based Network Compression
- LegoDNN: Block-grained Scaling of Deep Neural Networks for Mobile Vision
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching
- Distillation of Human-Object Interaction Contexts for Action Recognition
- Knowledge Distillation Improves Stability in Retranslation-based Simultaneous Translation
- Towards Disturbance-Free Visual Mobile Manipulation [Code]
- Distill and De-bias: Mitigating Bias in Face Verification using Knowledge Distillation
- Amortized Noisy Channel Neural Machine Translation
- Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text
- On the Use of External Data for Spoken Named Entity Recognition [Code]
- Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training [Code]
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation
- Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation [Code]
- Mutual Adversarial Training: Learning together is better than going alone
- Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-guided Feature Imitation
- Boosting Contrastive Learning with Relation Knowledge Distillation
- ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images [Code]
- Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation [Code]
- CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks
- Safe Distillation Box
- Extracting knowledge from features with multilevel abstraction
- KDCTime: Knowledge Distillation with Calibration on InceptionTime for Time-series Classification
Page 66 of 85

Benchmark Results

#  | Model                              | Metric             | Claimed | Verified | Status
1  | ScaleKD (T: BEiT-L, S: ViT-B/14)   | Top-1 accuracy (%) | 86.43   |          | Unverified
2  | ScaleKD (T: Swin-L, S: ViT-B/16)   | Top-1 accuracy (%) | 85.53   |          | Unverified
3  | ScaleKD (T: Swin-L, S: ViT-S/16)   | Top-1 accuracy (%) | 83.93   |          | Unverified
4  | ScaleKD (T: Swin-L, S: Swin-T)     | Top-1 accuracy (%) | 83.8    |          | Unverified
5  | KD++ (T: RegNetY-16GF, S: ViT-B)   | Top-1 accuracy (%) | 83.6    |          | Unverified
6  | VkD (T: RegNetY-160, S: DeiT-S)    | Top-1 accuracy (%) | 82.9    |          | Unverified
7  | SpectralKD (T: Swin-S, S: Swin-T)  | Top-1 accuracy (%) | 82.7    |          | Unverified
8  | ScaleKD (T: Swin-L, S: ResNet-50)  | Top-1 accuracy (%) | 82.55   |          | Unverified
9  | DiffKD (T: Swin-L, S: Swin-T)      | Top-1 accuracy (%) | 82.5    |          | Unverified
10 | DIST (T: Swin-L, S: Swin-T)        | Top-1 accuracy (%) | 82.3    |          | Unverified

#  | Model                                              | Metric             | Claimed | Verified | Status
1  | SRD (T: resnet-32x4, S: shufflenet-v2)             | Top-1 accuracy (%) | 79.86   |          | Unverified
2  | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2)   | Top-1 accuracy (%) | 78.76   |          | Unverified
3  | MV-MR (T: CLIP/ViT-B-16, S: resnet50)              | Top-1 accuracy (%) | 78.6    |          | Unverified
4  | resnet8x4 (T: resnet32x4, S: resnet8x4)            | Top-1 accuracy (%) | 78.28   |          | Unverified
5  | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08   |          | Unverified
6  | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2)      | Top-1 accuracy (%) | 77.93   |          | Unverified
7  | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1)      | Top-1 accuracy (%) | 77.68   |          | Unverified
8  | resnet8x4 (T: resnet32x4, S: resnet8x4)            | Top-1 accuracy (%) | 77.5    |          | Unverified
9  | resnet8x4 (T: resnet32x4, S: resnet8x4)            | Top-1 accuracy (%) | 76.68   |          | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4)            | Top-1 accuracy (%) | 76.31   |          | Unverified

#  | Model                                 | Metric | Claimed | Verified | Status
1  | LSHFM (T: ResNet101, S: ResNet50)     | mAP    | 93.17   |          | Unverified
2  | LSHFM (T: ResNet101, S: MobileNetV2)  | mAP    | 90.14   |          | Unverified

#  | Model                                 | Metric | Claimed | Verified | Status
1  | TIE-KD (T: Adabins, S: MobileNetV2)   | RMSE   | 2.43    |          | Unverified
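
The Verified column is empty for every entry above. Verifying a claimed Top-1 accuracy amounts to re-running the distilled student on the benchmark's validation split; a minimal sketch follows, where the checkpoint path and data loader are hypothetical placeholders rather than artifacts from any listed paper.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Compute Top-1 accuracy (%) of `model` over a labeled data loader."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=-1)      # class with the highest logit
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# Hypothetical usage: load a distilled student checkpoint and re-measure
# its accuracy to compare against the claimed number in the table.
# student.load_state_dict(torch.load("student_checkpoint.pt"))
# print(f"Verified Top-1: {top1_accuracy(student, val_loader):.2f}%")
```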