SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have a higher knowledge capacity than small models, that capacity is often not fully utilized, so a much smaller student model can frequently be trained to approximate the large model's behavior at a fraction of the inference cost.
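
Many of the papers and leaderboard entries below build on, or compare against, the soft-target formulation of Hinton et al. (2015): the student is trained to match the teacher's temperature-softened output distribution in addition to the hard labels. Below is a minimal PyTorch sketch of that loss; the function name and the temperature/alpha values are illustrative assumptions, not taken from any specific paper listed here.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence pulls the student toward the teacher's soft targets;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the ground-truth (hard) labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

In practice the teacher is run in eval mode with gradients disabled, and only the student's parameters are updated with this loss.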

Papers

Showing 3501–3550 of 4240 papers

Title | Status | Hype
Learning an Augmented RGB Representation with Cross-Modal Knowledge Distillation for Action Detection | - | 0
A distillation based approach for the diagnosis of diseases | - | 0
Spatio-Temporal Attention Mechanism and Knowledge Distillation for Lip Reading | - | 0
Decoupled Transformer for Scalable Inference in Open-domain Question Answering | - | 0
MS-KD: Multi-Organ Segmentation with Multiple Binary-Labeled Datasets | - | 0
WeChat Neural Machine Translation Systems for WMT21 | - | 0
Semi-Supervising Learning, Transfer Learning, and Knowledge Distillation with SimCLR | - | 0
On Knowledge Distillation for Translating Erroneous Speech Transcriptions | - | 0
In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval | - | 0
Multi-Strategy Knowledge Distillation Based Teacher-Student Framework for Machine Reading Comprehension | - | 0
Samsung R&D Institute Poland submission to WAT 2021 Indic Language Multilingual Task | - | 0
The USYD-JD Speech Translation System for IWSLT2021 | - | 0
NAIST English-to-Japanese Simultaneous Translation System for IWSLT 2021 Simultaneous Text-to-text Task | - | 0
Trigger is Not Sufficient: Exploiting Frame-aware Knowledge for Implicit Event Argument Extraction | - | 0
Inter-layer Knowledge Distillation for Neural Machine Translation (基于层间知识蒸馏的神经机器翻译) | - | 0
Matching Distributions between Model and Data: Cross-domain Knowledge Distillation for Unsupervised Domain Adaptation | - | 0
POS-Constrained Parallel Decoding for Non-autoregressive Generation | Code | 0
PRAL: A Tailored Pre-Training Model for Task-Oriented Dialog Generation | - | 0
Pose-Guided Feature Learning with Knowledge Distillation for Occluded Person Re-Identification | - | 0
On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals | Code | 0
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | - | 0
Using Perturbed Length-aware Positional Encoding for Non-autoregressive Neural Machine Translation | - | 0
In Defense of the Learning Without Forgetting for Task Incremental Learning | - | 0
Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation | - | 0
ROD: Reception-aware Online Distillation for Sparse Graphs | Code | 0
IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a New Fitness Function and a Generic Crossover Operator | Code | 0
The USYD-JD Speech Translation System for IWSLT 2021 | - | 0
Learning ULMFiT and Self-Distillation with Calibration for Medical Dialogue System | - | 0
Follow Your Path: a Progressive Method for Knowledge Distillation | - | 0
Double Similarity Distillation for Semantic Image Segmentation | - | 0
Federated Action Recognition on Heterogeneous Embedded Devices | - | 0
Scene-adaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search | - | 0
Technical Report of Team GraphMIRAcles in the WikiKG90M-LSC Track of OGB-LSC @ KDD Cup 2021 | - | 0
Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task | - | 0
A Flexible Multi-Task Model for BERT Serving | Code | 0
Contrast R-CNN for Continual Learning in Object Detection | - | 0
Lifelong Twin Generative Adversarial Networks | - | 0
Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation | - | 0
WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations | - | 0
Confidence Conditioned Knowledge Distillation | - | 0
CoReD: Generalizing Fake Media Detection with Continual Representation using Distillation | Code | 0
Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation | - | 0
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation | - | 0
On The Distribution of Penultimate Activations of Classification Networks | - | 0
Continual Contrastive Learning for Image Classification | Code | 0
Audio-Oriented Multimodal Machine Comprehension: Task, Dataset and Model | - | 0
Isotonic Data Augmentation for Knowledge Distillation | - | 0
Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks | Code | 0
Revisiting Knowledge Distillation: An Inheritance and Exploration Framework | Code | 0
Knowledge Distillation for Quality Estimation | Code | 0
Page 71 of 85

Benchmark Results

(T = teacher, S = student)

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified