SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. Large models (such as very deep neural networks or ensembles of many models) have greater knowledge capacity than small models, but that capacity may not be fully used, so a compact student trained to mimic the large teacher's outputs can often recover most of the teacher's accuracy at a fraction of the compute and memory cost.
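As a concrete illustration of the common logit-matching recipe (soft targets in the style of Hinton et al., 2015), the PyTorch sketch below trains a student against a blend of softened teacher outputs and ground-truth labels. This is a minimal sketch, not the method of any particular paper listed below: the two toy MLPs, the temperature T=4.0, and the mixing weight alpha=0.5 are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for an arbitrary teacher/student pair (hypothetical shapes).
teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of soft-target KL divergence and hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),  # student log-probs at temperature T
        F.softmax(teacher_logits / T, dim=-1),      # softened teacher targets
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-loss gradients match the hard loss in magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One illustrative training step on random data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
with torch.no_grad():               # the teacher stays frozen during distillation
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()                     # gradients flow only into the student
```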

Papers

Showing 3601–3650 of 4240 papers

Title | Status | Hype
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks | Code | 0
Knowledge Distillation for Singing Voice Detection | Code | 0
TinyBERT: Distilling BERT for Natural Language Understanding | Code | 0
Theory and Experiments on Vector Quantized Autoencoders | Code | 0
Knowledge Distillation for Quality Estimation | Code | 0
Whole-slide-imaging Cancer Metastases Detection and Localization with Limited Tumorous Data | Code | 0
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion | Code | 0
ThermoStereoRT: Thermal Stereo Matching in Real Time via Knowledge Distillation and Attention-based Refinement | Code | 0
Content Based Singing Voice Extraction From a Musical Mixture | Code | 0
Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time Person Re-Identification | Code | 0
LILA-BOTI : Leveraging Isolated Letter Accumulations By Ordering Teacher Insights for Bangla Handwriting Recognition | Code | 0
A Survey on the Robustness of Computer Vision Models against Common Corruptions | Code | 0
SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs | Code | 0
PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors | Code | 0
Knowledge Distillation for End-to-End Person Search | Code | 0
CONetV2: Efficient Auto-Channel Size Optimization for CNNs | Code | 0
Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0
Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling | Code | 0
Knowledge Distillation By Sparse Representation Matching | Code | 0
LIDAR and Position-Aided mmWave Beam Selection with Non-local CNNs and Curriculum Training | Code | 0
Domain Adaptable Fine-Tune Distillation Framework For Advancing Farm Surveillance | Code | 0
SkinDistilViT: Lightweight Vision Transformer for Skin Lesion Classification | Code | 0
The State of Knowledge Distillation for Classification | Code | 0
SlideGCD: Slide-based Graph Collaborative Training with Knowledge Distillation for Whole Slide Image Classification | Code | 0
Knowledge Distillation by On-the-Fly Native Ensemble | Code | 0
TOP-Training: Target-Oriented Pretraining for Medical Extractive Question Answering | Code | 0
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0
Slimmable Networks for Contrastive Self-supervised Learning | Code | 0
SlimNets: An Exploration of Deep Model Compression and Acceleration | Code | 0
DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation | Code | 0
The Trilemma of Truth in Large Language Models | Code | 0
Knowledge Distillation as Semiparametric Inference | Code | 0
Knowledge Distillation approach towards Melanoma Detection | Code | 0
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers | Code | 0
LLMQuoter: Enhancing RAG Capabilities Through Efficient Quote Extraction From Large Contexts | Code | 0
Complex Facial Expression Recognition Using Deep Knowledge Distillation of Basic Features | Code | 0
Smaller3d: Smaller Models for 3D Semantic Segmentation Using Minkowski Engine and Knowledge Distillation Methods | Code | 0
QUEST: Quantized embedding space for transferring knowledge | Code | 0
KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation | Code | 0
KDMOS: Knowledge Distillation for Motion Segmentation | Code | 0
Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation | Code | 0
Localized Symbolic Knowledge Distillation for Visual Commonsense Models | Code | 0
Locally Differentially Private Distributed Deep Learning via Knowledge Distillation | Code | 0
Zero-Shot Knowledge Distillation in Deep Networks | Code | 0
QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View | Code | 0
A Lightweight Target-Driven Network of Stereo Matching for Inland Waterways | Code | 0
Visual Relationship Detection with Language prior and Softmax | Code | 0
Does Training with Synthetic Data Truly Protect Privacy? | Code | 0
Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision | Code | 0
Annealing Knowledge Distillation | Code | 0
Page 73 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | | Unverified