
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
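For readers unfamiliar with how that transfer is typically done in practice, the sketch below shows the standard soft-target formulation of logit-based distillation (in the style of Hinton et al., 2015): the student is trained on a temperature-softened KL term against the teacher's outputs blended with the usual hard-label loss. It is a minimal, generic illustration only; the models, temperature, and weighting here are assumed for the example and are not taken from any paper or benchmark listed on this page.

```python
# Minimal sketch of vanilla (logit-based) knowledge distillation.
# Everything below (toy MLPs, T=4.0, alpha=0.9) is illustrative, not a reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend a soft-target KL term (teacher -> student) with the usual hard-label CE."""
    # Soft targets: softened student log-probs vs. softened teacher probs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients keep comparable magnitude across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a larger "teacher" MLP distilled into a smaller "student" MLP.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)  # teacher predictions are treated as fixed targets
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
opt.step()
```

Many of the papers listed below replace or augment this logit-matching objective (e.g. with feature-map, attention, or relational losses), but the teacher-supervises-student training loop stays the same in outline.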

Papers

Showing 4151–4200 of 4240 papers

Title | Status | Hype
Compressing GANs using Knowledge Distillation | - | 0
Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks | - | 0
Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract) | - | 0
Learning Efficient Detector with Semi-supervised Adaptive Distillation | Code | 0
Stealing Neural Networks via Timing Side Channels | - | 0
Improving the Interpretability of Deep Neural Networks with Knowledge Distillation | - | 0
Learning Student Networks via Feature Embedding | - | 0
Spatial Knowledge Distillation to aid Visual Reasoning | - | 0
Optimizing speed/accuracy trade-off for person re-identification via knowledge distillation | - | 0
An Embarrassingly Simple Approach for Knowledge Distillation | Code | 0
Few Sample Knowledge Distillation for Efficient Network Compression | Code | 0
Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling | - | 0
Knowledge Distillation with Feature Maps for Image Classification | - | 0
On Compressing U-net Using Knowledge Distillation | - | 0
KDGAN: Knowledge Distillation with Generative Adversarial Networks | - | 0
Learning to Specialize with Knowledge Distillation for Visual Question Answering | - | 0
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks | - | 0
Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation | - | 0
Structured Pruning of Neural Networks with Budget-Aware Regularization | - | 0
Graph-Adaptive Pruning for Efficient Inference of Convolutional Neural Networks | - | 0
Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models | - | 0
Self-Referenced Deep Learning | - | 0
Private Model Compression via Knowledge Distillation | - | 0
Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition | - | 0
Cogni-Net: Cognitive Feature Learning through Deep Visual Perception | Code | 0
A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation | - | 0
Block-wise Intermediate Representation Training for Model Compression | - | 0
KTAN: Knowledge Transfer Adversarial Network | - | 0
LIT: Block-wise Intermediate Representation Training for Model Compression | - | 0
Analyzing Knowledge Distillation in Neural Machine Translation | - | 0
Knowledge Distillation from Few Samples | - | 0
Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System | Code | 0
Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations | Code | 0
MotherNets: Rapid Deep Ensemble Learning | - | 0
Label Denoising with Large Ensembles of Heterogeneous Neural Networks | - | 0
RDPD: Rich Data Helps Poor Data via Imitation | Code | 0
Lifelong Learning via Progressive Distillation and Retrospection | - | 0
Attention-Guided Answer Distillation for Machine Reading Comprehension | - | 0
Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks | - | 0
Cooperative Denoising for Distantly Supervised Relation Extraction | - | 0
SlimNets: An Exploration of Deep Model Compression and Acceleration | Code | 0
Self-supervised Knowledge Distillation Using Singular Value Decomposition | Code | 0
Revisiting Distillation and Incremental Classifier Learning | Code | 0
Distillation Techniques for Pseudo-rehearsal Based Incremental Learning | Code | 0
Gradient Adversarial Training of Neural Networks | - | 0
Knowledge Distillation by On-the-Fly Native Ensemble | Code | 0
Coupled End-to-End Transfer Learning With Generalized Fisher Information | - | 0
Collaborative Learning for Deep Neural Networks | - | 0
A novel channel pruning method for deep neural network compression | - | 0
Theory and Experiments on Vector Quantized Autoencoders | Code | 0
Page 84 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | - | Unverified