
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized. Distillation exploits this gap: a compact student is trained to reproduce the outputs of a large teacher, often retaining much of the teacher's accuracy at a fraction of the inference cost, as the sketch below illustrates.
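As a concrete illustration, the classic soft-target formulation trains the student to match temperature-softened teacher probabilities while still fitting the ground-truth labels. Below is a minimal sketch in PyTorch; the temperature T, weight alpha, and the toy tensor shapes are illustrative assumptions, not values taken from any paper listed on this page.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # KL term between temperature-softened teacher and student
        # distributions, scaled by T^2 so its gradient magnitudes stay
        # comparable to the hard-label cross-entropy term.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy usage: a batch of 8 samples over 100 classes.
    student_logits = torch.randn(8, 100)
    teacher_logits = torch.randn(8, 100)
    labels = torch.randint(0, 100, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)

Higher temperatures flatten the teacher's distribution, exposing the relative probabilities it assigns to incorrect classes, which is precisely the information hard labels discard.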

Papers

Showing 201–250 of 4,240 papers

Title | Status | Hype
KD^2M: An unifying framework for feature knowledge distillation | – | 0
Style over Substance: Distilled Language Models Reason Via Stylistic Replication | – | 0
A Novel Approach To Implementing Knowledge Distillation In Tsetlin Machines | – | 0
OccludeNeRF: Geometric-aware 3D Scene Inpainting with Collaborative Score Distillation in NeRF | – | 0
Global Intervention and Distillation for Federated Out-of-Distribution Generalization | – | 0
Adversarial Curriculum Graph-Free Knowledge Distillation for Graph Neural Networks | – | 0
Is LLM the Silver Bullet to Low-Resource Languages Machine Translation? | – | 0
Unimodal-driven Distillation in Multimodal Emotion Recognition with Dynamic Fusion | – | 0
Crossmodal Knowledge Distillation with WordNet-Relaxed Text Embeddings for Robust Image Classification | – | 0
A Plasticity-Aware Method for Continual Self-Supervised Learning in Remote Sensing | – | 0
Multi-modal Knowledge Distillation-based Human Trajectory Forecasting | Code | 1
Efficient Verified Machine Unlearning For Distillation | – | 0
Intrinsic Image Decomposition for Robust Self-supervised Monocular Depth Estimation on Reflective Surfaces | – | 0
Delving Deep into Semantic Relation Distillation | – | 0
Alleviating LLM-based Generative Retrieval Hallucination in Alipay Search | – | 0
DuckSegmentation: A segmentation model based on the AnYue Hemp Duck Dataset | – | 0
Small Object Detection: A Comprehensive Survey on Challenges, Techniques and Real-World Applications | – | 0
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | – | 0
Modality-Independent Brain Lesion Segmentation with Privacy-aware Continual Learning | Code | 0
Scaling Down Text Encoders of Text-to-Image Diffusion Models | Code | 2
Plug-and-Play Interpretable Responsible Text-to-Image Generation via Dual-Space Multi-facet Concept Control | – | 0
Distilling Stereo Networks for Performant and Efficient Leaner Networks | Code | 0
FedSKD: Aggregation-free Model-heterogeneous Federated Learning using Multi-dimensional Similarity Knowledge Distillation | – | 0
CustomKD: Customizing Large Vision Foundation for Edge Model Improvement via Knowledge Distillation | – | 0
OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery | – | 0
Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs | – | 0
Improving Acoustic Scene Classification with City Features | – | 0
Efficient Knowledge Distillation via Curriculum Extraction | – | 0
InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-based Transformer | – | 0
Advancing Deep Learning through Probability Engineering: A Pragmatic Paradigm for Modern AI | – | 0
DCA: Dividing and Conquering Amnesia in Incremental Object Detection | Code | 0
Technical Report for the 5th CLVision Challenge at CVPR: Addressing the Class-Incremental with Repetition using Unlabeled Data -- 4th Place Solution | Code | 0
KoGNER: A Novel Framework for Knowledge Graph Distillation on Biomedical Named Entity Recognition | – | 0
High Temporal Consistency through Semantic Similarity Propagation in Semi-Supervised Video Semantic Segmentation for Autonomous Flight | Code | 1
Distilling 3D distinctive local descriptors for 6D pose estimation | – | 0
Ensemble Knowledge Distillation for Machine Learning Interatomic Potentials | – | 0
Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical Anomaly Detection | Code | 0
SCJD: Sparse Correlation and Joint Distillation for Efficient 3D Human Pose Estimation | Code | 0
Uncertainty-Aware Knowledge Distillation for Compact and Efficient 6DoF Pose Estimation | – | 0
Real-Time Cell Sorting with Scalable In Situ FPGA-Accelerated Deep Learning | Code | 0
A Comprehensive Survey on Knowledge Distillation | Code | 2
Enabling Weak Client Participation via On-device Knowledge Distillation in Heterogenous Federated Learning | – | 0
Exploring Performance-Complexity Trade-Offs in Sound Event Detection Models | Code | 1
Creating a Good Teacher for Knowledge Distillation in Acoustic Scene Classification | – | 0
Adaptive Temperature Based on Logits Correlation in Knowledge Distillation | Code | 0
CleverDistiller: Simple and Spatially Consistent Cross-modal Distillation | – | 0
xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation | – | 0
Vi-LAD: Vision-Language Attention Distillation for Socially-Aware Robot Navigation in Dynamic Environments | – | 0
Unified Locomotion Transformer with Simultaneous Sim-to-Real Transfer for Quadrupeds | – | 0
LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP ViT-B/16, S: resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified
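The metric in most rows above is top-1 accuracy: the percentage of test samples whose highest-scoring predicted class matches the ground-truth label. Verifying such a claim amounts to re-evaluating the released checkpoint on the benchmark's test split. A minimal evaluation sketch, assuming a PyTorch classifier and test loader (both hypothetical placeholders, not artifacts of any listed method):

    import torch

    @torch.no_grad()
    def top1_accuracy(model, loader, device="cpu"):
        # Count samples whose argmax prediction equals the label,
        # then report the percentage over the whole loader.
        model.eval()
        correct, total = 0, 0
        for images, labels in loader:
            logits = model(images.to(device))
            correct += (logits.argmax(dim=-1) == labels.to(device)).sum().item()
            total += labels.size(0)
        return 100.0 * correct / total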