
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized. Distillation trains a compact student model to reproduce the behavior of a large teacher, yielding a model that is cheaper to run and deploy while retaining most of the teacher's accuracy.
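
The canonical recipe (Hinton et al., 2015) trains the student on a weighted mix of the ground-truth labels and the teacher's temperature-softened output distribution. Below is a minimal PyTorch sketch, assuming pre-trained `teacher` and `student` classifiers over the same label space; the `temperature` and `alpha` values are illustrative defaults, not tuned settings.

```python
# Minimal sketch of vanilla knowledge distillation (soft-target matching).
# `teacher` and `student` are assumed to be classifiers over the same labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Weighted sum of a soft-target KL term and the usual hard-label loss."""
    # Soften both output distributions with the temperature.
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the softened student and teacher distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(soft_preds, soft_targets, log_target=True,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term

def train_step(student, teacher, x, labels, optimizer):
    teacher.eval()
    with torch.no_grad():          # the teacher is frozen during distillation
        teacher_logits = teacher(x)
    student_logits = student(x)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A higher temperature flattens the softmax, exposing more of the teacher's "dark knowledge" about how wrong classes relate to one another; `alpha` trades that signal off against the hard labels.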

Papers

Showing 2401–2450 of 4240 papers

Title | Status | Hype
Leveraging Expert Models for Training Deep Neural Networks in Scarce Data Domains: Application to Offline Handwritten Signature Verification | - | 0
A vision transformer-based framework for knowledge transfer from multi-modal to mono-modal lymphoma subtyping models | - | 0
Three Factors to Improve Out-of-Distribution Detection | - | 0
Spatio-Temporal Branching for Motion Prediction using Motion Increments | Code | 0
Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search | - | 0
Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video Quality Assessment | - | 0
Subspace Distillation for Continual Learning | Code | 0
Sampling to Distill: Knowledge Transfer from Open-World Data | - | 0
Federated Learning for Data and Model Heterogeneity in Medical Imaging | - | 0
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? | Code | 0
UPFL: Unsupervised Personalized Federated Learning towards New Clients | Code | 0
Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs | - | 0
Mitigating Cross-client GANs-based Attack in Federated Learning | - | 0
A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation | - | 0
HeteFedRec: Federated Recommender Systems with Model Heterogeneity | - | 0
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images | - | 0
Model Compression Methods for YOLOv5: A Review | - | 0
Quantized Feature Distillation for Network Quantization | - | 0
Cluster-aware Semi-supervised Learning: Relational Knowledge Distillation Provably Learns Clustering | Code | 0
LightPath: Lightweight and Scalable Path Representation Learning | Code | 0
Teach model to answer questions after comprehending the document | - | 0
Knowledge Distillation for Object Detection: from generic to remote sensing datasets | - | 0
Improving End-to-End Speech Translation by Imitation-Based Knowledge Distillation with Synthetic Transcripts | Code | 0
Domain Knowledge Distillation from Large Language Model: An Empirical Study in the Autonomous Driving Domain | - | 0
Cross-Lingual NER for Financial Transaction Data in Low-Resource Languages | - | 0
MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning | - | 0
A Survey of Techniques for Optimizing Transformer Inference | - | 0
Intuitive Access to Smartphone Settings Using Relevance Model Trained by Contrastive Learning | - | 0
SoccerKDNet: A Knowledge Distillation Framework for Action Recognition in Soccer Videos | - | 0
DreamTeacher: Pretraining Image Backbones with Deep Generative Models | - | 0
Regression-Oriented Knowledge Distillation for Lightweight Ship Orientation Angle Prediction with Optical Remote Sensing Images | Code | 0
Frameless Graph Knowledge Distillation | Code | 0
A metric learning approach for endoscopic kidney stone identification | - | 0
The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework | - | 0
Customizing Synthetic Data for Data-Free Student Learning | Code | 0
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation | - | 0
Contextual Affinity Distillation for Image Anomaly Detection | - | 0
Distilling Missing Modality Knowledge from Ultrasound for Endometriosis Diagnosis with Magnetic Resonance Images | - | 0
KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation | - | 0
Review helps learn better: Temporal Supervised Knowledge Distillation | - | 0
Shared Growth of Graph Neural Networks via Prompted Free-direction Knowledge Distillation | - | 0
Long-Tailed Continual Learning For Visual Food Recognition | - | 0
Streaming egocentric action anticipation: An evaluation scheme and approach | - | 0
Understanding the Overfitting of the Episodic Meta-training | - | 0
A Dimensional Structure based Knowledge Distillation Method for Cross-Modal Learning | - | 0
Exploring Dual Model Knowledge Distillation for Anomaly Detection | - | 0
Shoggoth: Towards Efficient Edge-Cloud Collaborative Real-Time Video Inference via Adaptive Online Learning | - | 0
Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation | - | 0
Accelerating Molecular Graph Neural Networks via Knowledge Distillation | - | 0

Benchmark Results

In each table, T: denotes the teacher model and S: the student; an empty Verified column means the claimed result has not yet been independently reproduced.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
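
Verifying a claimed figure amounts to re-running the released student checkpoint on the benchmark's evaluation split and comparing the measured metric against the Claimed column. A minimal sketch for the Top-1 accuracy (%) rows, assuming PyTorch and a standard classification DataLoader; `model`, `loader`, and `device` are illustrative names, not part of any released tooling.

```python
# Recompute top-1 accuracy for a distilled student checkpoint.
# All names are illustrative; substitute the benchmark's actual
# evaluation split and the checkpoint under review.
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        logits = model(images.to(device))
        # Top-1: the single highest-scoring class must equal the label.
        preds = logits.argmax(dim=-1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total  # percentage, matching the tables above
```

A row would move from Unverified to Verified only if the recomputed value matches the claimed one within the benchmark's stated tolerance.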