
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model (the "teacher") to a smaller one (the "student"). While large models, such as very deep neural networks or ensembles of many models, have higher knowledge capacity than small models, that capacity may not be fully utilized; a well-trained student can therefore recover much of the teacher's accuracy at a fraction of the inference cost.
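
Many of the methods listed below build on the same logit-matching objective, so a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015) may help for orientation. It assumes PyTorch; the function name distillation_loss and the defaults for the temperature T and the mixing weight alpha are illustrative choices, not taken from any paper on this page.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 4.0,      # temperature (illustrative default)
                      alpha: float = 0.7   # weight on the distillation term
                      ) -> torch.Tensor:
    """Soft-target KD loss: alpha * KL(teacher || student) + (1 - alpha) * CE."""
    # Soften both distributions with temperature T; the KL term transfers
    # the teacher's inter-class similarity structure ("dark knowledge").
    log_p_teacher = F.log_softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_p_student, log_p_teacher,
                  reduction="batchmean", log_target=True) * (T * T)  # T^2 keeps gradient scale comparable
    # Ordinary cross-entropy against the hard ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

In a typical training loop the teacher runs in eval mode under torch.no_grad(), so gradients flow only into the student.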

Papers

Showing 1701–1750 of 4240 papers

Title | Status | Hype
REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments | — | 0
3D Face Alignment Through Fusion of Head Pose Information and Features | — | 0
Self-Supervised Representation Learning with Cross-Context Learning between Global and Hypercolumn Features | — | 0
Fall Detection using Knowledge Distillation Based Long short-term memory for Offline Embedded and Low Power Devices | — | 0
Sentence Embedding Models for Ancient Greek Using Multilingual Knowledge Distillation | Code | 1
FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning | Code | 1
Ground-to-Aerial Person Search: Benchmark Dataset and Approach | Code | 1
DLIP: Distilling Language-Image Pre-training | — | 0
Multimodal Locally Enhanced Transformer for Continuous Sign Language Recognition | — | 0
Efficient Controllable Multi-Task Architectures | — | 0
FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning | Code | 1
SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation | Code | 1
Representation Disparity-aware Distillation for 3D Object Detection | — | 0
AltDiffusion: A Multilingual Text-to-Image Diffusion Model | Code | 1
LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis | Code | 2
Unlimited Knowledge Distillation for Action Recognition in the Dark | — | 0
CCFace: Classification Consistency for Low-Resolution Face Recognition | — | 0
Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning | Code | 1
Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation | — | 0
Learning Through Guidance: Knowledge Distillation for Endoscopic Image Classification | — | 0
SkinDistilViT: Lightweight Vision Transformer for Skin Lesion Classification | Code | 0
Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals | — | 0
Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach | — | 0
A Survey on Model Compression for Large Language Models | — | 0
Token-Scaled Logit Distillation for Ternary Weight Generative Language Models | Code | 1
Multi-Label Knowledge Distillation | Code | 1
Continual Face Forgery Detection via Historical Distribution Preserving | — | 0
Complex Facial Expression Recognition Using Deep Knowledge Distillation of Basic Features | Code | 0
Towards General and Fast Video Derain via Knowledge Distillation | — | 0
FPGA Resource-aware Structured Pruning for Real-Time Neural Networks | — | 0
Sci-CoT: Leveraging Large Language Models for Enhanced Knowledge Distillation in Small Models for Scientific QA | — | 0
Multi-View Fusion and Distillation for Subgrade Distresses Detection based on 3D-GPR | Code | 1
AICSD: Adaptive Inter-Class Similarity Distillation for Semantic Segmentation | Code | 1
ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data | Code | 2
Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation | Code | 0
Teacher-Student Architecture for Knowledge Distillation: A Survey | — | 0
Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization | — | 0
Efficient Temporal Sentence Grounding in Videos with Multi-Teacher Knowledge Distillation | Code | 0
Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation | Code | 0
One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer | Code | 1
Transferable Graph Structure Learning for Graph-based Traffic Forecasting Across Cities | Code | 1
Scene-aware Human Pose Generation using Transformer | — | 0
VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs | Code | 1
Class Incremental Learning with Self-Supervised Pre-Training and Prototype Learning | — | 0
Eyelid’s Intrinsic Motion-aware Feature Learning for Real-time Eyeblink Detection in the Wild | Code | 0
Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty | Code | 1
Improved Knowledge Distillation for Crowd Counting on IoT Device | Code | 0
A vision transformer-based framework for knowledge transfer from multi-modal to mono-modal lymphoma subtyping models | — | 0
Spatio-Temporal Branching for Motion Prediction using Motion Increments | Code | 0
Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search | — | 0
Page 35 of 85

Benchmark Results

Each entry lists a distillation method together with its teacher (T:) and student (S:) architectures. "Claimed" is the figure reported in the paper; the "Verified" column is empty for results that have not yet been independently confirmed.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | — | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | — | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | — | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | — | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | — | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | — | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | — | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | — | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | — | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | — | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | — | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | — | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | — | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | — | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | — | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | — | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | — | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | — | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | — | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | — | Unverified