
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, much of that capacity may go unused at inference time. Distillation trains a compact student model to reproduce the behavior of the large teacher, so the student can often approach the teacher's accuracy at a fraction of the compute and memory cost.
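
In its most common form, the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. The snippet below is a minimal, illustrative PyTorch sketch of that soft-target objective; the toy models, temperature, and loss weight are assumptions for demonstration and are not taken from any paper listed on this page.

```python
# Minimal sketch of the standard soft-target distillation loss.
# All model sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a soft-target KL term and the usual cross-entropy."""
    # Soften both distributions with temperature T; scale by T^2 so the
    # gradient magnitude stays comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy teacher/student: a large and a small MLP on random 32-dim inputs.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))

with torch.no_grad():          # the teacher is frozen during distillation
    teacher_logits = teacher(x)
student_logits = student(x)

loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()                # only the student receives gradients
print(loss.item())
```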

Papers

Showing 1751–1800 of 4240 papers

Title | Status | Hype
AMD: Automatic Multi-step Distillation of Large-scale Vision Models | - | 0
Understanding the Gains from Repeated Self-Distillation | - | 0
Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates | - | 0
Fully Fine-tuned CLIP Models are Efficient Few-Shot Learners | - | 0
Relative Difficulty Distillation for Semantic Segmentation | Code | 0
DSMix: Distortion-Induced Sensitivity Map Based Pre-training for No-Reference Image Quality Assessment | Code | 0
MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models | - | 0
Accelerated Proton Resonance Frequency-based Magnetic Resonance Thermometry by Optimized Deep Learning Method | Code | 0
Supporting Cross-language Cross-project Bug Localization Using Pre-trained Language Models | - | 0
Edge AI-Enabled Chicken Health Detection Based on Enhanced FCOS-Lite and Knowledge Distillation | - | 0
Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization | - | 0
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment | - | 0
Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer based Dim Object Detection | Code | 0
Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application | - | 0
Self-Cooperation Knowledge Distillation for Novel Class Discovery | - | 0
ECAT: A Entire space Continual and Adaptive Transfer Learning Framework for Cross-Domain Recommendation | - | 0
Advancing Compressed Video Action Recognition through Progressive Knowledge Distillation | Code | 0
uDistil-Whisper: Label-Free Data Filtering for Knowledge Distillation in Low-Data Regimes | Code | 0
BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization | - | 0
FANFOLD: Graph Normalizing Flows-driven Asymmetric Network for Unsupervised Graph-Level Anomaly Detection | Code | 0
Enhancing Accuracy and Parameter-Efficiency of Neural Representations for Network Parameterization | - | 0
Direct Preference Knowledge Distillation for Large Language Models | - | 0
MuGSI: Distilling GNNs with Multi-Granularity Structural Information for Graph Classification | Code | 0
Instance Temperature Knowledge Distillation | Code | 0
Aligning Teacher with Student Preferences for Tailored Training Data Generation | - | 0
On Reducing Activity with Distillation and Regularization for Energy Efficient Spiking Neural Networks | - | 0
Sequential Editing for Lifelong Training of Speech Recognition Models | - | 0
Towards Optimal Trade-offs in Knowledge Distillation for CNNs and Vision Transformers at the Edge | - | 0
Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation | - | 0
WAVE: Weight Template for Adaptive Initialization of Variable-sized Models | - | 0
Knowledge Distillation in Automated Annotation: Supervised Text Classification with LLM-Generated Training Labels | - | 0
Highly Constrained Coded Aperture Imaging Systems Design Via a Knowledge Distillation Approach | - | 0
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation | - | 0
Leveraging Knowledge Distillation for Lightweight Skin Cancer Classification: Balancing Accuracy and Computational Efficiency | - | 0
Exploring compressibility of transformer based text-to-music (TTM) models | - | 0
Enhancing OOD Detection Using Latent Diffusion | Code | 0
The Privileged Students: On the Value of Initialization in Multilingual Knowledge Distillation | - | 0
Continual Learning with Diffusion-based Generative Replay for Industrial Streaming Data | - | 0
Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning | - | 0
Reinforced Knowledge Distillation for Time Series Regression | Code | 0
Failure-Resilient Distributed Inference with Model Compression over Heterogeneous Edge Devices | - | 0
Factual Dialogue Summarization via Learning from Large Language Models | - | 0
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots | - | 0
Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition | - | 0
Multi-Stage Balanced Distillation: Addressing Long-Tail Challenges in Sequence-Level Knowledge Distillation | Code | 0
WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation | Code | 0
Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning? | - | 0
Federated Learning with a Single Shared Image | Code | 0
Enhancing Single-Slice Segmentation with 3D-to-2D Unpaired Scan Distillation | - | 0
Vernacular? I Barely Know Her: Challenges with Style Control and Stereotyping | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified