SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized, so a compact student model trained to mimic the outputs of the large teacher can often recover much of the teacher's performance at a fraction of the inference cost.
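
In the most common formulation (soft-label distillation in the style of Hinton et al.), the student is trained on a weighted combination of the ordinary cross-entropy loss and a KL-divergence term that pulls its temperature-softened predictions toward the teacher's. The sketch below illustrates that generic loss in PyTorch; the temperature and weighting values are illustrative assumptions, and this is not the specific method of any paper listed on this page.

```python
# Minimal sketch of the classic soft-label distillation loss.
# Temperature T and weight alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a softened KL term (student mimics teacher) and standard cross-entropy."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # KL between softened distributions, scaled by T^2 so gradients keep a comparable magnitude.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage: the teacher runs in eval mode with gradients disabled; only the student is updated.
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```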

Papers

Showing 1651–1700 of 4240 papers

Title | Status | Hype
Facilitating NSFW Text Detection in Open-Domain Dialogue Systems via Knowledge Distillation | Code | 0
Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation | - | 0
DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues | Code | 1
Heterogeneous Generative Knowledge Distillation with Masked Image Modeling | - | 0
FDCNet: Feature Drift Compensation Network for Class-Incremental Weakly Supervised Object Localization | Code | 1
UNIDEAL: Curriculum Knowledge Distillation Federated Learning | - | 0
One-Class Knowledge Distillation for Spoofing Speech Detection | - | 0
Privacy-preserving Early Detection of Epileptic Seizures in Videos | Code | 0
Cross-lingual Knowledge Distillation via Flow-based Voice Conversion for Robust Polyglot Text-To-Speech | - | 0
Two-Step Knowledge Distillation for Tiny Speech Enhancement | - | 0
Adaptive Prompt Learning with Distilled Connective Knowledge for Implicit Discourse Relation Recognition | Code | 0
ChromaDistill: Colorizing Monochrome Radiance Fields with Knowledge Distillation | - | 0
CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders | - | 0
A Novel Local-Global Feature Fusion Framework for Body-weight Exercise Recognition with Pressure Mapping Sensors | - | 0
Continual Learning with Dirichlet Generative-based Rehearsal | - | 0
Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection | - | 0
KD-FixMatch: Knowledge Distillation Siamese Neural Networks | - | 0
DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices | - | 0
DAD++: Improved Data-free Test Time Adversarial Defense | Code | 0
Exploiting CLIP for Zero-shot HOI Detection Requires Knowledge Distillation at Multiple Levels | Code | 0
Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations | - | 0
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models | Code | 0
Knowledge Distillation-Empowered Digital Twin for Anomaly Detection | - | 0
Towards Mitigating Architecture Overfitting on Distilled Datasets | Code | 0
3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation | - | 0
Towards Comparable Knowledge Distillation in Semantic Image Segmentation | - | 0
Leveraging ASR Pretrained Conformers for Speaker Verification through Transfer Learning and Knowledge Distillation | - | 0
Knowledge Distillation Layer that Lets the Student Decide | Code | 0
DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation | - | 0
Rethinking Momentum Knowledge Distillation in Online Continual Learning | Code | 1
A deep Natural Language Inference predictor without language-specific training data | - | 0
Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation | - | 0
TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models | - | 0
Probabilistic Self-supervised Learning via Scoring Rules Minimization | - | 0
A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking | - | 0
On the Query Strategies for Efficient Online Active Distillation | - | 0
Prior Knowledge Guided Network for Video Anomaly Detection | - | 0
COMEDIAN: Self-Supervised Learning and Knowledge Distillation for Action Spotting using Transformers | Code | 1
Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer | - | 0
Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff | - | 0
MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis | Code | 0
Towards Long-Tailed Recognition for Graph Classification via Collaborative Experts | - | 0
Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection | - | 0
SpikeBERT: A Language Spikformer Learned from BERT with Knowledge Distillation | Code | 1
SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data | Code | 0
Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection | Code | 1
Distilled GPT for Source Code Summarization | Code | 0
Boosting Residual Networks with Group Knowledge | Code | 0
DM-VTON: Distilled Mobile Real-time Virtual Try-On | Code | 1
Improving Knowledge Distillation for BERT Models: Loss Functions, Mapping Methods, and Weight Tuning | - | 0
Page 34 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | - | Unverified
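
For the classification benchmarks above, a claimed Top-1 accuracy number can in principle be checked by re-running the released student checkpoint on the held-out validation split. The sketch below assumes a standard PyTorch/torchvision evaluation loop; the checkpoint file, data directory, and the choice of a ResNet-50 student are hypothetical placeholders rather than details taken from the table entries.

```python
# Minimal sketch of verifying a claimed Top-1 accuracy for a distilled student.
# Paths, batch size, and the ResNet-50 architecture are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Placeholder path to an ImageFolder-style validation set.
val_set = datasets.ImageFolder("path/to/validation_set", transform=val_tf)
loader = DataLoader(val_set, batch_size=256, num_workers=8)

student = models.resnet50()  # student architecture under test (placeholder)
student.load_state_dict(torch.load("student_checkpoint.pth", map_location="cpu"))  # placeholder file
student.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = student(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```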