SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
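
Concretely, the standard recipe trains the student to match the teacher's temperature-softened output distribution while still fitting the ground-truth labels. Below is a minimal sketch of that soft-target objective (after Hinton et al., 2015), assuming PyTorch; the temperature T, mixing weight alpha, and the random tensors in the usage snippet are illustrative placeholders rather than settings from any paper or result listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a KL term that pulls the student
    toward the teacher's temperature-softened class distribution."""
    # Soften both output distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # kl_div expects log-probabilities for the input and probabilities for
    # the target; scaling by T^2 keeps gradient magnitudes comparable.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Ordinary supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage with random tensors standing in for real teacher/student outputs.
student_logits = torch.randn(8, 100)   # batch of 8, 100 classes
teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```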

Papers

Showing 2351–2400 of 4240 papers

Title | Status | Hype
Towards Comparable Knowledge Distillation in Semantic Image Segmentation | - | 0
Leveraging ASR Pretrained Conformers for Speaker Verification through Transfer Learning and Knowledge Distillation | - | 0
A deep Natural Language Inference predictor without language-specific training data | - | 0
DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation | - | 0
Knowledge Distillation Layer that Lets the Student Decide | Code | 0
Probabilistic Self-supervised Learning via Scoring Rules Minimization | - | 0
TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models | - | 0
Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation | - | 0
A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking | - | 0
On the Query Strategies for Efficient Online Active Distillation | - | 0
Prior Knowledge Guided Network for Video Anomaly Detection | - | 0
Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer | - | 0
Towards Long-Tailed Recognition for Graph Classification via Collaborative Experts | - | 0
MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis | Code | 0
Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff | - | 0
Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection | - | 0
Distilled GPT for Source Code Summarization | Code | 0
SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data | Code | 0
Boosting Residual Networks with Group Knowledge | Code | 0
Improving Knowledge Distillation for BERT Models: Loss Functions, Mapping Methods, and Weight Tuning | - | 0
REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments | - | 0
Self-Supervised Representation Learning with Cross-Context Learning between Global and Hypercolumn Features | - | 0
3D Face Alignment Through Fusion of Head Pose Information and Features | - | 0
Fall Detection using Knowledge Distillation Based Long Short-Term Memory for Offline Embedded and Low Power Devices | - | 0
DLIP: Distilling Language-Image Pre-training | - | 0
Efficient Controllable Multi-Task Architectures | - | 0
Multimodal Locally Enhanced Transformer for Continuous Sign Language Recognition | - | 0
Representation Disparity-aware Distillation for 3D Object Detection | - | 0
Unlimited Knowledge Distillation for Action Recognition in the Dark | - | 0
CCFace: Classification Consistency for Low-Resolution Face Recognition | - | 0
Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation | - | 0
Learning Through Guidance: Knowledge Distillation for Endoscopic Image Classification | - | 0
Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals | - | 0
SkinDistilViT: Lightweight Vision Transformer for Skin Lesion Classification | Code | 0
A Survey on Model Compression for Large Language Models | - | 0
Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach | - | 0
Complex Facial Expression Recognition Using Deep Knowledge Distillation of Basic Features | Code | 0
Continual Face Forgery Detection via Historical Distribution Preserving | - | 0
Towards General and Fast Video Derain via Knowledge Distillation | - | 0
Sci-CoT: Leveraging Large Language Models for Enhanced Knowledge Distillation in Small Models for Scientific QA | - | 0
FPGA Resource-aware Structured Pruning for Real-Time Neural Networks | - | 0
Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation | Code | 0
Teacher-Student Architecture for Knowledge Distillation: A Survey | - | 0
Efficient Temporal Sentence Grounding in Videos with Multi-Teacher Knowledge Distillation | Code | 0
Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization | - | 0
Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation | Code | 0
Class Incremental Learning with Self-Supervised Pre-Training and Prototype Learning | - | 0
Scene-aware Human Pose Generation using Transformer | - | 0
Eyelid’s Intrinsic Motion-aware Feature Learning for Real-time Eyeblink Detection in the Wild | Code | 0
Improved Knowledge Distillation for Crowd Counting on IoT Device | Code | 0
Page 48 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified