
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized. The large model is just as expensive to evaluate either way, so distillation trains a compact student to reproduce the behavior of the large teacher at a fraction of the inference cost.
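As a concrete illustration, below is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015) in PyTorch. The temperature T, the weighting alpha, and the toy teacher and student networks are illustrative assumptions, not drawn from any paper listed on this page.

```python
# Minimal sketch of the soft-target distillation loss (Hinton et al., 2015).
# T (temperature), alpha (loss weighting), and the toy networks are
# illustrative choices, not taken from any specific paper indexed here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a soft-target KL term and a hard-label CE term."""
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T*T factor keeps gradient magnitudes comparable to the hard term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: distill a larger teacher into a smaller student on random data.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher is frozen during distillation
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()  # gradients flow only into the student
```

The alpha knob trades off mimicking the teacher's softened predictions against fitting the ground-truth labels; many of the papers indexed below can be read as refinements of one or both of these terms.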

Papers

Showing 951–1000 of 4240 papers

Title | Status | Hype
Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Code | 1
Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution | Code | 1
Knowledge Distillation with the Reused Teacher Classifier | Code | 1
DE-RRD: A Knowledge Distillation Framework for Recommender System | Code | 1
Boosting Light-Weight Depth Estimation Via Knowledge Distillation | Code | 1
Knowledge Distillation via the Target-aware Transformer | Code | 1
Knowledge Distillation with Refined Logits | Code | 1
Leveraging Topological Guidance for Improved Knowledge Distillation | Code | 0
Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments | Code | 0
An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering | Code | 0
Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets | Code | 0
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data | Code | 0
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models | Code | 0
Leveraging Foundation Models via Knowledge Distillation in Multi-Object Tracking: Distilling DINOv2 Features to FairMOT | Code | 0
Leveraging Large Language Models for Active Merchant Non-player Characters | Code | 0
Bridging Modalities: Knowledge Distillation and Masked Training for Translating Multi-Modal Emotion Recognition to Uni-Modal, Speech-Only Emotion Recognition | Code | 0
LENAS: Learning-based Neural Architecture Search and Ensemble for 3D Radiotherapy Dose Prediction | Code | 0
Bridging Dimensions: Confident Reachability for High-Dimensional Controllers | Code | 0
Less-supervised learning with knowledge distillation for sperm morphology analysis | Code | 0
Learning without Forgetting for 3D Point Cloud Objects | Code | 0
Learn What Is Possible, Then Choose What Is Best: Disentangling One-To-Many Relations in Language Through Text-based Games | Code | 0
ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images | Code | 0
Learning to Maximize Mutual Information for Chain-of-Thought Distillation | Code | 0
Leave No One Behind: Enhancing Diversity While Maintaining Accuracy in Social Recommendation | Code | 0
Learning Lightweight Lane Detection CNNs by Self Attention Distillation | Code | 0
Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0
Learning from Noisy Crowd Labels with Logics | Code | 0
Digital Staining with Knowledge Distillation: A Unified Framework for Unpaired and Paired-But-Misaligned Data | Code | 0
An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning | Code | 0
Learning Deep and Compact Models for Gesture Recognition | Code | 0
An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning | Code | 0
Differentially Private Knowledge Distillation via Synthetic Text Generation | Code | 0
Born Again Neural Networks | Code | 0
Learning Efficient Detector with Semi-supervised Adaptive Distillation | Code | 0
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation | Code | 0
Leaning Compact and Representative Features for Cross-Modality Person Re-Identification | Code | 0
Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0
Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition | Code | 0
Adaptive Temperature Based on Logits Correlation in Knowledge Distillation | Code | 0
Boosting Summarization with Normalizing Flows and Aggressive Training | Code | 0
Adaptive Teaching with Shared Classifier for Knowledge Distillation | Code | 0
KS-DETR: Knowledge Sharing in Attention Learning for Detection Transformer | Code | 0
KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server | Code | 0
Detect, Distill and Update: Learned DB Systems Facing Out of Distribution Data | Code | 0
Boosting Residual Networks with Group Knowledge | Code | 0
Knowledge Transfer Graph for Deep Collaborative Learning | Code | 0
Light Multi-segment Activation for Model Compression | Code | 0
Learning to "Segment Anything" in Thermal Infrared Images through Knowledge Distillation with a Large Scale Dataset SATIR | Code | 0
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy % | 86.43 | – | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy % | 85.53 | – | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy % | 83.93 | – | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 83.8 | – | Unverified
5 | KD++ (T: regnety-16GF S: ViT-B) | Top-1 accuracy % | 83.6 | – | Unverified
6 | VkD (T: RegNety 160 S: DeiT-S) | Top-1 accuracy % | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S S: Swin-T) | Top-1 accuracy % | 82.7 | – | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy % | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.5 | – | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | – | Unverified