SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
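The standard recipe trains the small "student" model to match the temperature-softened output distribution of the large "teacher" while still fitting the ground-truth labels. Below is a minimal, generic sketch of that soft-target (logit) distillation loss in PyTorch; the tiny teacher/student networks, the temperature, and the mixing weight alpha are illustrative placeholders, not settings taken from any paper listed on this page.

```python
# Minimal sketch of vanilla soft-target knowledge distillation
# (Hinton et al.-style logit matching). All shapes, hyperparameters,
# and the toy networks below are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend a soft-target KL term with the usual cross-entropy on hard labels."""
    # Soften both distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions; the T^2 factor keeps
    # its gradient magnitude comparable to the hard-label term.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Toy example: distill a larger teacher into a smaller student on dummy data.
teacher = torch.nn.Sequential(torch.nn.Linear(32, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 10))
student = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(),
                              torch.nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))
with torch.no_grad():                 # the teacher is frozen during distillation
    t_logits = teacher(x)

optimizer.zero_grad()
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
optimizer.step()
```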

Papers

Showing 2851-2900 of 4240 papers

Title | Hype
The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework | 0
The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes | 0
The USYD-JD Speech Translation System for IWSLT 2021 | 0
The Xiaomi Text-to-Text Simultaneous Speech Translation System for IWSLT 2022 | 0
Three Factors to Improve Out-of-Distribution Detection | 0
TIMA: Text-Image Mutual Awareness for Balancing Zero-Shot Adversarial Robustness and Generalization Ability | 0
TimeDistill: Efficient Long-Term Time Series Forecasting with MLP via Cross-Architecture Distillation | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | 0
XtremeDistil: Multi-stage Distillation for Massive Multilingual Models | 0
TinyViT: Fast Pretraining Distillation for Small Vision Transformers | 0
TIP: Typifying the Interpretability of Procedures | 0
TKD: Temporal Knowledge Distillation for Active Perception | 0
ToDi: Token-wise Distillation via Fine-Grained Divergence Control | 0
TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models | 0
Tokenizing Electron Cloud in Protein-Ligand Interaction Learning | 0
Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion | 0
Topic Modeling for Maternal Health Using Reddit | 0
Topological Persistence Guided Knowledge Distillation for Wearable Sensor Data | 0
Topology Distillation for Recommender System | 0
torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation | 0
torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP | 0
To Smooth or not to Smooth? On Compatibility between Label Smoothing and Knowledge Distillation | 0
Toward Data-centric Directed Graph Learning: An Entropy-driven Approach | 0
Toward Efficient Deep Spiking Neuron Networks: A Survey On Compression | 0
Toward Fair Graph Neural Networks Via Dual-Teacher Knowledge Distillation | 0
Toward Model-centric Heterogeneous Federated Graph Learning: A Knowledge-driven Approach | 0
Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation | 0
Towards a better understanding of Vector Quantized Autoencoders | 0
Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need | 0
Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval | 0
Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text | 0
Towards a Unified View of Affinity-Based Knowledge Distillation | 0
Towards a Universal Continuous Knowledge Base | 0
Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search | 0
Reconsidering Learning Objectives in Unbiased Recommendation with Unobserved Confounders | 0
Towards Building Secure UAV Navigation with FHE-aware Knowledge Distillation | 0
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift | 0
Towards Comparable Knowledge Distillation in Semantic Image Segmentation | 0
Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation | 0
Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation | 0
Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation | 0
Towards Efficient Task-Driven Model Reprogramming with Foundation Models | 0
Towards Explaining Autonomy with Verbalised Decision Tree States | 0
Towards Expressive Speaking Style Modelling with Hierarchical Context Information for Mandarin Speech Synthesis | 0
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | 0
Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation | 0
Towards Full Utilization on Mask Task for Distilling PLMs into NMT | 0
Towards General and Fast Video Derain via Knowledge Distillation | 0
CAM-loss: Towards Learning Spatially Discriminative Feature Representations | 0
Page 58 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified