
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model (the teacher, marked T: in the tables below) to a smaller one (the student, S:). While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized, so a much smaller student can often be trained to approach the teacher's accuracy at a fraction of the inference cost.
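
The classic recipe for this transfer, due to Hinton et al. (2015), trains the student to match the teacher's temperature-softened output distribution while still fitting the ground-truth labels. Below is a minimal sketch in PyTorch; the function name and the temperature/alpha defaults are illustrative, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Soft-target KD loss (Hinton et al., 2015): a temperature-softened
    KL term against the teacher, blended with hard-label cross-entropy.
    temperature and alpha are illustrative defaults, not from this page."""
    # Soften both distributions; a higher temperature exposes the
    # teacher's relative probabilities over non-target classes.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps soft-target gradients comparable in
    # magnitude as the temperature changes.
    kd = F.kl_div(log_p_student, p_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard supervised cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Usage sketch: the teacher is frozen and run without gradients.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```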

Papers

Showing 576–600 of 4240 papers

Title | Status | Hype
KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo | Code | 1
Informative knowledge distillation for image anomaly segmentation | Code | 1
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation | Code | 1
Class-incremental Novel Class Discovery | Code | 1
Rethinking Data Augmentation for Robust Visual Question Answering | Code | 1
Multi-Level Branched Regularization for Federated Learning | Code | 1
Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources | Code | 1
Re2G: Retrieve, Rerank, Generate | Code | 1
Contrastive Deep Supervision | Code | 1
Knowledge Condensation Distillation | Code | 1
HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors | Code | 1
Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis | Code | 1
FairDistillation: Mitigating Stereotyping in Language Models | Code | 1
Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer | Code | 1
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning | Code | 1
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing? | Code | 1
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation | Code | 1
itKD: Interchange Transfer-based Knowledge Distillation for 3D Object Detection | Code | 1
Towards Efficient 3D Object Detection with Knowledge Distillation | Code | 1
RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch | Code | 1
Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors | Code | 1
Geometer: Graph Few-Shot Class-Incremental Learning via Prototype Representation | Code | 1
Continual evaluation for lifelong learning: Identifying the stability gap | Code | 1
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation | Code | 1
Optimizing Performance of Federated Person Re-identification: Benchmarking and Analysis | Code | 1

Page 24 of 170

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | ShuffleNetV2 (T: resnet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP ViT-B/16, S: ResNet-50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet32x4, S: ShuffleNetV2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet32x4, S: ShuffleNetV1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet-101, S: ResNet-50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet-101, S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified