Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized. In the standard setup, a compact student network is trained to match the softened output distribution of a larger teacher, so that much of the teacher's accuracy is retained at a fraction of the inference cost; see the sketch below.
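
As a minimal sketch of the response-based variant (Hinton et al., 2015), the loss below mixes a temperature-softened KL term against the teacher's logits with ordinary cross-entropy on the hard labels. This is a generic PyTorch illustration; the temperature and mixing weight are common illustrative defaults, not values taken from any paper listed on this page.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the soft-target KL loss with standard cross-entropy.

    temperature and alpha are illustrative defaults, not tuned values.
    """
    # Soften both distributions with the temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradient magnitude stays
    # comparable to the hard-label term (Hinton et al., 2015).
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Higher temperatures expose more of the teacher's "dark knowledge", the relative probabilities it assigns to incorrect classes, which is what the student learns from beyond the hard labels.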

Papers

Showing 1801-1850 of 4240 papers

Title | Status | Hype
mCLIP: Multilingual CLIP via Cross-lingual Transfer | Code | 1
Customizing Synthetic Data for Data-Free Student Learning | Code | 0
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation | Code | 1
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation | - | 0
Contextual Affinity Distillation for Image Anomaly Detection | - | 0
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1
MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets | Code | 1
Distilling Missing Modality Knowledge from Ultrasound for Endometriosis Diagnosis with Magnetic Resonance Images | - | 0
KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation | - | 0
Review helps learn better: Temporal Supervised Knowledge Distillation | - | 0
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
Shared Growth of Graph Neural Networks via Prompted Free-direction Knowledge Distillation | - | 0
Long-Tailed Continual Learning For Visual Food Recognition | - | 0
Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision | Code | 1
Audio Embeddings as Teachers for Music Classification | Code | 1
Understanding the Overfitting of the Episodic Meta-training | - | 0
Streaming egocentric action anticipation: An evaluation scheme and approach | - | 0
NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency | Code | 1
Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation | Code | 1
On information captured by neural networks: connections with memorization and generalization | Code | 1
A Dimensional Structure based Knowledge Distillation Method for Cross-Modal Learning | - | 0
Exploring Dual Model Knowledge Distillation for Anomaly Detection | - | 0
Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation | - | 0
Shoggoth: Towards Efficient Edge-Cloud Collaborative Real-Time Video Inference via Adaptive Online Learning | - | 0
Accelerating Molecular Graph Neural Networks via Knowledge Distillation | - | 0
Federated Learning on Non-iid Data via Local and Global Distillation | - | 0
Cross Architecture Distillation for Face Recognition | - | 0
Feature Adversarial Distillation for Point Cloud Classification | - | 0
Enhancing Mapless Trajectory Prediction through Knowledge Distillation | - | 0
Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training | Code | 1
Temporal Action Proposal Generation With Action Frequency Adaptive Network | Code | 0
Incorporating Graph Information in Transformer-based AMR Parsing | Code | 0
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes | - | 0
Knowledge Distillation via Token-level Relationship Graph | - | 0
Recent Advances in Direct Speech-to-text Translation | - | 0
CrossKD: Cross-Head Knowledge Distillation for Object Detection | Code | 1
FSAR: Federated Skeleton-based Action Recognition with Adaptive Topology Structure and Knowledge Distillation | - | 0
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation | - | 0
Semi-Supervised Learning for Multi-Label Cardiovascular Diseases Prediction: A Multi-Dataset Study | - | 0
Squeezing nnU-Nets with Knowledge Distillation for On-Board Cloud Detection | - | 0
Knowledge Distillation for Efficient Audio-Visual Video Captioning | - | 0
MixedTeacher: Knowledge Distillation for fast inference textural anomaly detection | Code | 0
Coaching a Teachable Student | Code | 1
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models | Code | 0
Self-Knowledge Distillation for Surgical Phase Recognition | - | 0
Heterogeneous Continual Learning | - | 0
MiniLLM: Knowledge Distillation of Large Language Models | Code | 2
BPKD: Boundary Privileged Knowledge Distillation For Semantic Segmentation | Code | 1
Enhanced Multimodal Representation Learning with Cross-modal KD | - | 0
Page 37 of 85

Benchmark Results

In each table below, the Model column names the distillation method together with its teacher (T:) and student (S:) architectures; Claimed is the metric value reported by the authors, and the Verified column stays empty until the result has been independently reproduced.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
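
Every teacher-student pairing in these tables shares the same training recipe: the teacher is frozen and only the student is optimized, with the methods differing mainly in which signal (logits, features, or relations) the loss matches. As a hypothetical illustration of one response-based training step, reusing distillation_loss from the sketch above; student, teacher, optimizer, images, and labels are placeholder names, not part of any benchmarked method:

```python
import torch

def distill_step(student, teacher, optimizer, images, labels):
    """One student update; the teacher only supplies soft targets."""
    teacher.eval()
    with torch.no_grad():  # never backpropagate through the teacher
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```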