
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity may not be fully utilized. Distillation exploits this by training a small "student" model to reproduce the behavior of a large "teacher", typically by matching the teacher's temperature-softened output distribution in addition to the ground-truth labels.
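As a concrete illustration, here is a minimal sketch of the classic soft-target formulation (Hinton et al., 2015) in PyTorch. The function name, the temperature `T=4.0`, the weight `alpha=0.5`, and the toy tensors are illustrative choices, not taken from any paper listed on this page:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher knowledge) with hard-label cross-entropy."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 keeps the soft-target gradient magnitude comparable to the hard-label term.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: batch of 8 examples over 10 classes; random logits stand in for model outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

Raising `T` above 1 flattens the teacher's distribution, exposing the relative probabilities it assigns to wrong classes ("dark knowledge") that a one-hot label discards.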

Papers

Showing 1–10 of 4,240 papers

| Title | Status | Hype |
|---|---|---|
| Visual-Language Model Knowledge Distillation Method for Image Quality Assessment | | 0 |
| Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces | | 0 |
| DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition | Code | 0 |
| HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training | | 0 |
| Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning | | 0 |
| Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift | | 0 |
| SFedKD: Sequential Federated Learning with Discrepancy-Aware Multi-Teacher Knowledge Distillation | | 0 |
| KAT-V1: Kwai-AutoThink Technical Report | | 0 |
| The Trilemma of Truth in Large Language Models | Code | 0 |
| Layer Importance for Mathematical Reasoning is Forged in Pre-Training and Invariant after Post-Training | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | | Unverified |
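For context on the metric above: RMSE here is presumably the standard root-mean-square error between predicted and ground-truth depth maps, since Adabins (the teacher) is a monocular depth estimator; lower is better. A minimal sketch, with illustrative tensor shapes:

```python
import torch

def rmse(pred, target):
    """Root-mean-square error between predicted and ground-truth depth maps."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

# Toy example: batch of 2 predicted depth maps at 240x320 resolution.
pred = torch.rand(2, 240, 320) * 10.0
target = torch.rand(2, 240, 320) * 10.0
print(rmse(pred, target).item())
```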