
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized. Distillation exploits this by training a compact "student" model to reproduce the behavior of a large "teacher", typically by matching the teacher's softened output distribution rather than only the hard labels, yielding a smaller model that retains much of the teacher's accuracy.
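To make the idea concrete, below is a minimal sketch of the classic soft-target distillation loss of Hinton et al. (2015): the student is trained to match the teacher's temperature-softened output distribution while still fitting the ground-truth labels. This is an illustrative PyTorch sketch, not the method of any particular paper listed below; the function name and the default values of the temperature T and mixing weight alpha are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Hypothetical helper: classic soft-target KD loss (Hinton et al., 2015).
    # KL divergence between temperature-softened teacher and student
    # distributions, blended with cross-entropy on the hard labels.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),  # student log-probs at temperature T
        F.softmax(teacher_logits / T, dim=-1),      # teacher probs at temperature T
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-loss gradients comparable in scale across temperatures
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Illustrative usage: the teacher is frozen, only the student receives gradients.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```

A higher T exposes more of the teacher's "dark knowledge" (the relative probabilities it assigns to incorrect classes), while alpha trades off imitating the teacher against fitting the labels.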

Papers

Showing 1751–1775 of 4240 papers

Title | Status | Hype
Three Factors to Improve Out-of-Distribution Detection | – | 0
Spatio-Temporal Branching for Motion Prediction using Motion Increments | Code | 0
Leveraging Expert Models for Training Deep Neural Networks in Scarce Data Domains: Application to Offline Handwritten Signature Verification | – | 0
Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video Quality Assessment | – | 0
Online Prototype Learning for Online Continual Learning | Code | 1
NormKD: Normalized Logits for Knowledge Distillation | Code | 1
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? | Code | 0
Federated Learning for Data and Model Heterogeneity in Medical Imaging | – | 0
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration | Code | 1
Sampling to Distill: Knowledge Transfer from Open-World Data | – | 0
Subspace Distillation for Continual Learning | Code | 0
Effective Whole-body Pose Estimation with Two-stages Distillation | Code | 4
UPFL: Unsupervised Personalized Federated Learning towards New Clients | Code | 0
f-Divergence Minimization for Sequence-Level Knowledge Distillation | Code | 1
Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs | – | 0
Fitting Auditory Filterbanks with Multiresolution Neural Networks | Code | 1
Mitigating Cross-client GANs-based Attack in Federated Learning | – | 0
MetricGAN-OKD: Multi-Metric Optimization of MetricGAN via Online Knowledge Distillation for Speech Enhancement | Code | 1
A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation | – | 0
HeteFedRec: Federated Recommender Systems with Model Heterogeneity | – | 0
CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1
Model Compression Methods for YOLOv5: A Review | – | 0
DPM-OT: A New Diffusion Probabilistic Model Based on Optimal Transport | Code | 1
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images | – | 0
Quantized Feature Distillation for Network Quantization | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified