SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model (the teacher) to a smaller one (the student). While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized, so a distilled student can often recover much of the teacher's performance at a fraction of the cost.
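The standard recipe (from "Distilling the Knowledge in a Neural Network", listed in the papers below) trains the student on a blend of hard labels and the teacher's temperature-softened outputs. Below is a minimal sketch of that loss, assuming PyTorch; the function name and the alpha/temperature defaults are illustrative, not taken from any specific paper on this page.

```python
# Minimal sketch of soft-target knowledge distillation (Hinton et al., 2015).
# Assumes PyTorch; alpha and T are illustrative hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend cross-entropy on hard labels with a KL term that matches the
    student's temperature-softened distribution to the teacher's."""
    # Soften both distributions with the temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 keeps the soft-target gradients on a comparable scale across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```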

Papers

Showing 601–650 of 4240 papers

Title | Status | Hype
Distilled Semantics for Comprehensive Scene Understanding from Videos | Code | 1
Distilling the Knowledge in a Neural Network | Code | 1
Class-relation Knowledge Distillation for Novel Class Discovery | Code | 1
Dynamic Temperature Knowledge Distillation | Code | 1
EchoDFKD: Data-Free Knowledge Distillation for Cardiac Ultrasound Segmentation using Synthetic Data | Code | 1
Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs | Code | 1
Action knowledge for video captioning with graph neural networks | Code | 1
Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation | Code | 1
Dense Interspecies Face Embedding | Code | 1
Efficient Knowledge Distillation from Model Checkpoints | Code | 1
Defocus Blur Detection via Depth Distillation | Code | 1
Tracking-by-Trackers with a Distilled and Reinforced Model | Code | 1
CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers | Code | 1
CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data | Code | 1
CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1
CLIP model is an Efficient Continual Learner | Code | 1
Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation | Code | 1
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation | Code | 1
FocusNet: Classifying Better by Focusing on Confusing Classes | Code | 1
End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation | Code | 1
Deformation Flow Based Two-Stream Network for Lip Reading | Code | 1
Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Code | 1
DeepKD: A Deeply Decoupled and Denoised Knowledge Distillation Trainer | Code | 1
Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation | Code | 1
CLRKDNet: Speeding up Lane Detection with Knowledge Distillation | Code | 1
Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation | Code | 1
DeepAqua: Self-Supervised Semantic Segmentation of Wetland Surface Water Extent with SAR Images using Knowledge Distillation | Code | 1
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation | Code | 1
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation | Code | 1
CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation | Code | 1
Decoupled Multimodal Distilling for Emotion Recognition | Code | 1
Exploring Deeper! Segment Anything Model with Depth Perception for Camouflaged Object Detection | Code | 1
Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks | Code | 1
Coaching a Teachable Student | Code | 1
Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation | Code | 1
Exploring Performance-Complexity Trade-Offs in Sound Event Detection Models | Code | 1
Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework | Code | 1
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation | Code | 1
Adversarially Robust Distillation | Code | 1
Content-Aware GAN Compression | Code | 1
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | Code | 1
Collaborative Distillation for Ultra-Resolution Universal Style Transfer | Code | 1
FDCNet: Feature Drift Compensation Network for Class-Incremental Weakly Supervised Object Localization | Code | 1
Feature Structure Distillation with Centered Kernel Alignment in BERT Transferring | Code | 1
FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection | Code | 1
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
KNOT: Knowledge Distillation using Optimal Transport for Solving NLP Tasks | Code | 1
Federated Learning with Extremely Noisy Clients via Negative Distillation | Code | 1
Deep Structured Instance Graph for Distilling Object Detectors | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.5 | | Unverified
10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | | Unverified