SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have a higher knowledge capacity than small models, that capacity may not be fully utilized. Distillation exploits this by training a compact student model to reproduce the behavior of a larger teacher, typically by matching the teacher's output distribution or intermediate representations, yielding a model that is cheaper to deploy while retaining much of the teacher's accuracy.
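
As a concrete illustration of the general idea (and not the method of any specific paper listed below), the sketch below shows the classic soft-target distillation loss of Hinton et al.: the student is trained to match the teacher's temperature-softened output distribution alongside the usual supervised cross-entropy. The temperature and weighting defaults are placeholder choices, not values taken from this page.

```python
# Minimal sketch of soft-target knowledge distillation (Hinton et al., 2015).
# Illustrative only; temperature/alpha are placeholder hyperparameters.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a KL term on temperature-softened outputs with standard cross-entropy."""
    # Soft targets: both distributions are smoothed by the temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Hard targets: ordinary supervised loss on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

During training only the student's parameters are updated; the teacher's logits are computed in inference mode and treated as fixed targets.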

Papers

Showing 2751–2800 of 4240 papers

Title | Status | Hype
Rethinking Feature-Based Knowledge Distillation for Face Recognition | - | 0
CLIPPING: Distilling CLIP-Based Models With a Student Base for Video-Language Retrieval | - | 0
Distilling Focal Knowledge From Imperfect Expert for 3D Object Detection | Code | 0
ScaleKD: Distilling Scale-Aware Knowledge in Small Object Detector | - | 0
Active Exploration of Multimodal Complementarity for Few-Shot Action Recognition | - | 0
MEDIC: Remove Model Backdoors via Importance Driven Cloning | - | 0
You Do Not Need Additional Priors or Regularizers in Retinex-Based Low-Light Image Enhancement | - | 0
Automated Knowledge Distillation via Monte Carlo Tree Search | Code | 0
Distilling Cross-Temporal Contexts for Continuous Sign Language Recognition | - | 0
DaFKD: Domain-Aware Federated Knowledge Distillation | - | 0
TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient Vision Transformer Scaling and Searching | - | 0
ICD-Face: Intra-class Compactness Distillation for Face Recognition | - | 0
Knowledge-Spreader: Learning Semi-Supervised Facial Action Dynamics by Consistifying Knowledge Granularity | - | 0
Beyond the Limitation of Monocular 3D Detector via Knowledge Distillation | Code | 0
SMOC-Net: Leveraging Camera Pose for Self-Supervised Monocular Object Pose Estimation | - | 0
Tiny Updater: Towards Efficient Neural Network-Driven Software Updating | Code | 0
Continual Segment: Towards a Single, Unified and Non-forgetting Continual Segmentation Model of 143 Whole-body Organs in CT Scans | - | 0
Alleviating Catastrophic Forgetting of Incremental Object Detection via Within-Class and Between-Class Knowledge Distillation | - | 0
Multi-Task Learning with Knowledge Distillation for Dense Prediction | - | 0
Incrementer: Transformer for Class-Incremental Semantic Segmentation With Knowledge Distillation Focusing on Old Class | - | 0
Masked Autoencoders Are Stronger Knowledge Distillers | - | 0
Endpoints Weight Fusion for Class Incremental Semantic Segmentation | - | 0
X3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection | - | 0
Bilateral Memory Consolidation for Continual Learning | - | 0
FedICT: Federated Multi-task Distillation for Multi-access Edge Computing | Code | 0
Probabilistic Knowledge Distillation of Face Ensembles | - | 0
Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation | - | 0
A Unified Object Counting Network with Object Occupation Prior | Code | 0
Prototype-guided Cross-task Knowledge Distillation for Large-scale Models | Code | 0
BD-KD: Balancing the Divergences for Online Knowledge Distillation | - | 0
CAMeMBERT: Cascading Assistant-Mediated Multilingual BERT | - | 0
UNIKD: UNcertainty-filtered Incremental Knowledge Distillation for Neural Implicit Representation | Code | 0
RangeAugment: Efficient Online Augmentation with Range Learning | - | 0
Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning | - | 0
Fine-Grained Distillation for Long Document Retrieval | - | 0
Adam: Dense Retrieval Distillation with Adaptive Dark Examples | - | 0
Multi-View Knowledge Distillation from Crowd Annotations for Out-of-Domain Generalization | - | 0
I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation | - | 0
KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales | - | 0
Continual Knowledge Distillation for Neural Machine Translation | Code | 0
3D Point Cloud Pre-training with Knowledge Distillation from 2D Images | - | 0
Teaching Small Language Models to Reason | - | 0
Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework | - | 0
Hybrid Paradigm-based Brain-Computer Interface for Robotic Arm Control | - | 0
Domain Adaptation for Dense Retrieval through Self-Supervision by Pseudo-Relevance Labeling | - | 0
Multimodal Matching-aware Co-attention Networks with Mutual Knowledge Distillation for Fake News Detection | - | 0
Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging | - | 0
Siamese Sleep Transformer For Robust Sleep Stage Scoring With Self-knowledge Distillation and Selective Batch Sampling | - | 0
Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization | - | 0
Teaching What You Should Teach: A Data-Based Distillation Method | - | 0

Page 56 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
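
Each entry above pairs a "Claimed" metric with an empty "Verified" column. As a rough, non-authoritative sketch of how a claimed Top-1 accuracy could be re-checked, the snippet below evaluates a student checkpoint on a validation set; `model` and `loader` are assumed to be supplied by the reader (e.g. a distilled checkpoint and the matching validation DataLoader) and are not artifacts published on this page.

```python
# Hedged sketch: recomputing Top-1 accuracy for a distilled student checkpoint.
# The model and DataLoader are user-supplied placeholders.
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device=None):
    """Return Top-1 accuracy (in %) of `model` over `loader`."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=-1)      # predicted class per image
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total                # percentage, comparable to "Claimed"
```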