SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model (the teacher) to a smaller one (the student). While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized, so a well-trained student can often recover much of the teacher's accuracy at a fraction of the inference cost.
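
In the classic recipe (soft-target distillation, Hinton et al., 2015), the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. Below is a minimal PyTorch sketch of that loss; the temperature T and mixing weight alpha are illustrative defaults, not values drawn from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss (Hinton et al., 2015) -- a minimal sketch.

    T (temperature) and alpha (soft/hard mixing weight) are illustrative
    hyperparameters, not values taken from any paper on this page.
    """
    # KL divergence between the temperature-softened teacher and student
    # distributions; the T**2 factor keeps gradient magnitudes comparable
    # across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In practice T and alpha are tuned per task; the papers indexed below explore many alternatives (feature-, correlation-, and ensemble-based distillation) on top of this baseline.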

Papers

Showing 1551–1600 of 4240 papers

Title | Status | Hype
GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation | - | 0
Enhancing CTC-Based Visual Speech Recognition | - | 0
Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging | - | 0
Feature Adversarial Distillation for Point Cloud Classification | - | 0
Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data | - | 0
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models | - | 0
Feature Alignment-Based Knowledge Distillation for Efficient Compression of Large Language Models | - | 0
Feature-Align Network with Knowledge Distillation for Efficient Denoising | - | 0
Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution | - | 0
Feature-based One-For-All: A Universal Framework for Heterogeneous Knowledge Distillation | - | 0
Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning | - | 0
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning | - | 0
Enhancing Content Representation for AR Image Quality Assessment Using Knowledge Distillation | - | 0
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities | - | 0
Feature Interaction Fusion Self-Distillation Network For CTR Prediction | - | 0
Feature Kernel Distillation | - | 0
Enhancing Chinese Multi-Label Text Classification Performance with Response-based Knowledge Distillation | - | 0
A Technical Study into Small Reasoning Language Models | - | 0
Cost-effective Deployment of BERT Models in Serverless Environment | - | 0
Adapting OC20-trained EquiformerV2 Models for High-Entropy Materials | - | 0
Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification | - | 0
Feature Structure Distillation for BERT Transferring | - | 0
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | - | 0
Compressing Image-to-Image Translation GANs Using Local Density Structures on Their Learned Manifold | - | 0
Compressing GANs using Knowledge Distillation | - | 0
FedAL: Black-Box Federated Knowledge Distillation Enabled by Adversarial Learning | - | 0
CoT-Drive: Efficient Motion Forecasting for Autonomous Driving with LLMs and Chain-of-Thought Prompting | - | 0
Enhancing Action Recognition from Low-Quality Skeleton Data via Part-Level Knowledge Distillation | - | 0
A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone | - | 0
GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | - | 0
Enhancing Accuracy and Parameter-Efficiency of Neural Representations for Network Parameterization | - | 0
Enhancing Abstractiveness of Summarization Models through Calibrated Distillation | - | 0
FedDKD: Federated Learning with Decentralized Knowledge Distillation | - | 0
FedDTG: Federated Data-Free Knowledge Distillation via Three-Player Generative Adversarial Networks | - | 0
CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization | - | 0
FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction | - | 0
FedEFM: Federated Endovascular Foundation Model with Unseen Data | - | 0
Federated Action Recognition on Heterogeneous Embedded Devices | - | 0
Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process | - | 0
Federated Deconfounding and Debiasing Learning for Out-of-Distribution Generalization | - | 0
Compressing Deep Image Super-resolution Models | - | 0
Gradient Adversarial Training of Neural Networks | - | 0
Enhanced Sparsification via Stimulative Training | - | 0
Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions | - | 0
Federated Graph Learning with Graphless Clients | - | 0
CREFT: Sequential Multi-Agent LLM for Character Relation Extraction | - | 0
Enhanced Multimodal Representation Learning with Cross-modal KD | - | 0
Federated Knowledge Transfer Fine-tuning Large Server Model with Resource-Constrained IoT Clients | - | 0
Federated Learning for Data and Model Heterogeneity in Medical Imaging | - | 0
Compressed Meta-Optical Encoder for Image Classification | - | 0
Page 32 of 85

Benchmark Results

In the model columns below, "T:" denotes the teacher model and "S:" the student in each distillation pair. Every claim listed here is still marked Unverified, so the Verified column is empty.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
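
All entries above carry claimed numbers awaiting verification. Checking a claimed Top-1 accuracy amounts to running the released student checkpoint through a standard evaluation loop. Here is a generic PyTorch sketch of such a check; it is not the site's actual verification pipeline (which is not documented here), and `model` and `dataloader` are hypothetical stand-ins for a loaded checkpoint and a labeled test set.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, dataloader, device="cpu"):
    """Top-1 accuracy over a labeled dataset -- a generic evaluation sketch,
    not the site's verification pipeline (which is not documented here)."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in dataloader:
        logits = model(images.to(device))
        preds = logits.argmax(dim=-1)          # predicted class per example
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return 100.0 * correct / total             # percentage, as in the tables
```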