
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized. Distillation exploits this by training a compact student model to reproduce the behavior of the large teacher, so the student can approach the teacher's accuracy at a much lower inference cost.
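
The standard recipe, introduced by Hinton et al. (2015), trains the student on a weighted mix of the usual cross-entropy loss and a KL-divergence term that matches the student's temperature-softened outputs to the teacher's. Below is a minimal sketch in PyTorch; the temperature, the loss weighting, and the teacher/student models are illustrative assumptions, not taken from any paper listed on this page.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target distillation loss: cross-entropy on the hard labels
    plus a KL term matching the student's softened distribution to the
    teacher's. temperature and alpha are illustrative defaults."""
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL term; the T^2 factor keeps gradient magnitudes comparable
    # across temperatures, as in the original formulation.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Standard supervised term on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term

# Usage sketch for one training step (teacher, student, images, and
# labels are assumed to exist elsewhere; the teacher stays frozen):
#   with torch.no_grad():
#       teacher_logits = teacher(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
#   loss.backward()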

Papers

Showing 3801–3850 of 4240 papers

Title | Status | Hype
Knowledge Distillation for Anomaly Detection | - | 0
Knowledge Distillation for Bilingual Dictionary Induction | - | 0
Knowledge Distillation of Black-Box Large Language Models | - | 0
Knowledge Distillation for Efficient Sequences of Training Runs | - | 0
Knowledge Distillation for Efficient Audio-Visual Video Captioning | - | 0
Knowledge Distillation for Enhancing Walmart E-commerce Search Relevance Using Large Language Models | - | 0
Knowledge distillation for fast and accurate DNA sequence correction | - | 0
Knowledge Distillation for Federated Learning: a Practical Guide | - | 0
Knowledge Distillation for Image Restoration : Simultaneous Learning from Degraded and Clean Images | - | 0
Knowledge Distillation for Improved Accuracy in Spoken Question Answering | - | 0
Knowledge Distillation for Incremental Learning in Semantic Segmentation | - | 0
Knowledge Distillation for Mobile Edge Computation Offloading | - | 0
Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation | - | 0
Knowledge Distillation for Multimodal Egocentric Action Recognition Robust to Missing Modalities | - | 0
Knowledge Distillation for Neural Transducer-based Target-Speaker ASR: Exploiting Parallel Mixture/Single-Talker Speech Data | - | 0
Knowledge Distillation for Neural Transducers from Large Self-Supervised Pre-trained Models | - | 0
Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-guided Feature Imitation | - | 0
Knowledge Distillation for Object Detection: from generic to remote sensing datasets | - | 0
Knowledge Distillation for Oriented Object Detection on Aerial Images | - | 0
Knowledge Distillation for Real-Time Classification of Early Media in Voice Communications | - | 0
Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization | - | 0
Knowledge Distillation for Reservoir-based Classifier: Human Activity Recognition | - | 0
Knowledge Distillation for Road Detection based on cross-model Semi-Supervised Learning | - | 0
Knowledge distillation for semi-supervised domain adaptation | - | 0
Knowledge Distillation for Small-footprint Highway Networks | - | 0
Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance | - | 0
Knowledge Distillation for Sustainable Neural Machine Translation | - | 0
Knowledge Distillation for Swedish NER models: A Search for Performance and Efficiency | - | 0
Knowledge Distillation for Underwater Feature Extraction and Matching via GAN-synthesized Images | - | 0
Knowledge Distillation Framework for Accelerating High-Accuracy Neural Network-Based Molecular Dynamics Simulations | - | 0
Boosting of Head Pose Estimation by Knowledge Distillation | - | 0
Knowledge Distillation from Few Samples | - | 0
Knowledge Distillation from Internal Representations | - | 0
Knowledge distillation from language model to acoustic model: a hierarchical multi-task learning approach | - | 0
Knowledge Distillation from Language-Oriented to Emergent Communication for Multi-Agent Remote Control | - | 0
Knowledge distillation from multi-modal to mono-modal segmentation networks | - | 0
Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition | - | 0
Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer | - | 0
Knowledge Distillation Improves Stability in Retranslation-based Simultaneous Translation | - | 0
Knowledge Distillation in Automated Annotation: Supervised Text Classification with LLM-Generated Training Labels | - | 0
Knowledge Distillation in Deep Learning and its Applications | - | 0
Knowledge Distillation in Document Retrieval | - | 0
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions | - | 0
Knowledge Distillation in Generations: More Tolerant Teachers Educate Better Students | - | 0
Knowledge Distillation in Vision Transformers: A Critical Review | - | 0
Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher | - | 0
Knowledge Distillation Meets Few-Shot Learning: An Approach for Few-Shot Intent Classification Within and Across Domains | - | 0
Knowledge Distillation Methods for Efficient Unsupervised Adaptation Across Multiple Domains | - | 0
Knowledge Distillation Neural Network for Predicting Car-following Behaviour of Human-driven and Autonomous Vehicles | - | 0
Knowledge Distillation of Convolutional Neural Networks through Feature Map Transformation using Decision Trees | - | 0
Page 77 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy % | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy % | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy % | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy % | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy % | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy % | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy % | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S: Swin-T) | Top-1 accuracy % | 82.5 | - | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | - | Unverified