SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity may not be fully utilized. Distillation therefore trains the small student model to reproduce the behaviour of the large teacher, typically by matching the teacher's softened output distribution in addition to the ground-truth labels, so that much of the teacher's accuracy is retained at a much lower inference cost.

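As a rough illustration of the classic soft-target formulation (Hinton et al., 2015), the sketch below combines a temperature-scaled KL term against the teacher's outputs with the usual cross-entropy on the labels. It assumes PyTorch; the function name, temperature, and weighting are illustrative defaults, not taken from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target KD loss: alpha * KL(teacher || student) + (1 - alpha) * CE."""
    # Soften both output distributions with the same temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between the softened distributions, rescaled by T^2 so its
    # gradient magnitude stays comparable to the cross-entropy term.
    kd_term = F.kl_div(log_p_student, p_teacher,
                       reduction="batchmean") * temperature ** 2
    # Ordinary cross-entropy against the hard ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Hypothetical usage on one batch, with `teacher` and `student` standing in
# for any pair of classification models:
#   teacher.eval()
#   with torch.no_grad():
#       teacher_logits = teacher(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
#   loss.backward()
```

Many of the papers below replace or augment this logit-matching term (e.g. with feature, contrastive, or preference-based objectives), but the teacher-to-student transfer pattern is the same.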
Papers

Showing 1001–1050 of 4240 papers (page 21 of 85)

Title | Status | Hype
IOR: Inversed Objects Replay for Incremental Object Detection | - | 0
To Distill or Not to Distill? On the Robustness of Robust Knowledge Distillation | Code | 0
LenslessFace: An End-to-End Optimized Lensless System for Privacy-Preserving Face Verification | Code | 1
Step Out and Seek Around: On Warm-Start Training with Incremental Data | - | 0
Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders | - | 0
Decision Boundary-aware Knowledge Consolidation Generates Better Instance-Incremental Learner | - | 0
Tiny models from tiny data: Textual and null-text inversion for few-shot distillation | Code | 0
PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs | - | 0
Adversarial Moment-Matching Distillation of Large Language Models | Code | 0
Multi-Task Multi-Scale Contrastive Knowledge Distillation for Efficient Medical Image Segmentation | Code | 1
Optimal Transport Guided Correlation Assignment for Multimodal Entity Linking | Code | 0
RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models | - | 0
DL-KDD: Dual-Light Knowledge Distillation for Action Recognition in the Dark | - | 0
Toward Efficient Deep Spiking Neuron Networks: A Survey On Compression | - | 0
Decoupled Alignment for Robust Plug-and-Play Adaptation | - | 0
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model | Code | 0
Learning Background Prompts to Discover Implicit Knowledge for Open Vocabulary Object Detection | - | 0
Multi-label Class Incremental Emotion Decoding with Augmented Emotional Semantics Learning | - | 0
Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning | Code | 0
Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling | Code | 0
GKT: A Novel Guidance-Based Knowledge Transfer Framework For Efficient Cloud-edge Collaboration LLM Deployment | Code | 0
Distribution Aligned Semantics Adaption for Lifelong Person Re-Identification | Code | 0
Scalable Detection of Salient Entities in News Articles | - | 0
Relation Modeling and Distillation for Learning with Noisy Labels | - | 0
Improving the Training of Rectified Flows | Code | 2
Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach | - | 0
WebUOT-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark | - | 0
BLSP-KD: Bootstrapping Language-Speech Pre-training via Knowledge Distillation | - | 0
Forward-Backward Knowledge Distillation for Continual Clustering | - | 0
Continual Collaborative Distillation for Recommender System | Code | 1
Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures | - | 0
SLMRec: Distilling Large Language Models into Small for Sequential Recommendation | Code | 1
P4: Towards private, personalized, and Peer-to-Peer learning | - | 0
TIMA: Text-Image Mutual Awareness for Balancing Zero-Shot Adversarial Robustness and Generalization Ability | - | 0
LoReTrack: Efficient and Accurate Low-Resolution Transformer Tracking | Code | 1
UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation | - | 0
Noisy Data Meets Privacy: Training Local Models with Post-Processed Remote Queries | - | 0
Rethinking Early-Fusion Strategies for Improved Multispectral Object Detection | Code | 1
A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation | - | 0
Harnessing Increased Client Participation with Cohort-Parallel Federated Learning | - | 0
Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets | Code | 0
3D Annotation-Free Learning by Distilling 2D Open-Vocabulary Segmentation Models for Autonomous Driving | Code | 1
Pre-Trained Vision-Language Models as Partial Annotators | - | 0
Recurrent Early Exits for Federated Learning with Heterogeneous Clients | Code | 1
JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models | Code | 1
Awesome Multi-modal Object Tracking | Code | 5
Efficient Multitask Dense Predictor via Binarization | Code | 0
AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation | Code | 0
Data-Free Federated Class Incremental Learning with Diffusion-Based Generative Memory | - | 0
Joint Optimization of Streaming and Non-Streaming Automatic Speech Recognition with Multi-Decoder and Knowledge Distillation | - | 0

Benchmark Results

Each entry lists a distillation method with its teacher (T) and student (S) architectures. "Claimed" is the result reported in the corresponding paper; none of the entries below has been independently verified yet.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified