
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
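
As a concrete illustration, below is a minimal sketch of the classic soft-label distillation loss (Hinton et al., 2015) in PyTorch. The temperature and weighting values are illustrative defaults, not taken from any paper or benchmark listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend the soft-target KL term with standard cross-entropy on hard labels."""
    # Soften both distributions with temperature T and match them via KL divergence.
    soft_targets = F.log_softmax(teacher_logits / T, dim=-1)
    soft_preds = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_preds, soft_targets, reduction="batchmean", log_target=True)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    kd = kd * (T * T)
    # Hard-label cross-entropy keeps the student anchored to the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example: a batch of 8 samples over 100 classes with random teacher/student logits.
student_logits = torch.randn(8, 100)
teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```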

Papers

Showing 151–200 of 4240 papers

Title | Status | Hype
The Estimation of Continual Causal Effect for Dataset Shifting Streams | | 0
SAM-Guided Robust Representation Learning for One-Shot 3D Medical Image Segmentation | | 0
Trace-of-Thought Prompting: Investigating Prompt-Based Knowledge Distillation Through Question Decomposition | | 0
DS_FusionNet: Dynamic Dual-Stream Fusion with Bidirectional Knowledge Distillation for Plant Disease Recognition | Code | 0
Federated One-Shot Learning with Data Privacy and Objective-Hiding | | 0
Knowledge Distillation of Domain-adapted LLMs for Question-Answering in Telecom | | 0
Swapped Logit Distillation via Bi-level Teacher Alignment | Code | 0
Unified Attacks to Large Language Model Watermarks: Spoofing and Scrubbing in Unauthorized Knowledge Distillation | | 0
Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs | | 0
Does Knowledge Distillation Matter for Large Language Model based Bundle Generation? | | 0
Emo Pillars: Knowledge Distillation to Support Fine-Grained Context-Aware and Context-Less Emotion Classification | | 0
Distribution-aware Forgetting Compensation for Exemplar-Free Lifelong Person Re-identification | Code | 1
Turbo2K: Towards Ultra-Efficient and High-Quality 2K Video Synthesis | | 0
Knowledge Distillation and Dataset Distillation of Large Language Models: Emerging Trends, Challenges, and Future Directions | | 0
Empirical Evaluation of Knowledge Distillation from Transformers to Subquadratic Language Models | | 0
Teach Me How to Denoise: A Universal Framework for Denoising Multi-modal Recommender Systems via Guided Calibration | Code | 1
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models | | 0
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | | 0
Scaling Laws for Data-Efficient Visual Transfer Learning | | 0
Transferable Deployment of Semantic Edge Inference Systems via Unsupervised Domain Adaption | | 0
Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution | Code | 2
Efficient Reasoning Models: A Survey | Code | 3
Efficient Hybrid Language Model Compression through Group-Aware SSM Pruning | | 0
A Dual-Space Framework for General Knowledge Distillation of Large Language Models | Code | 1
Better Estimation of the KL Divergence Between Language Models | Code | 1
Digital Staining with Knowledge Distillation: A Unified Framework for Unpaired and Paired-But-Misaligned Data | Code | 0
Can LLMs Revolutionize the Design of Explainable and Efficient TinyML Models? | | 0
Optimizing Multi-Gateway LoRaWAN via Cloud-Edge Collaboration and Knowledge Distillation | | 0
Learning Occlusion-Robust Vision Transformers for Real-Time UAV Tracking | Code | 2
Knowledge Distillation for Underwater Feature Extraction and Matching via GAN-synthesized Images | | 0
Proxy-Anchor and EVT-Driven Continual Learning Method for Generalized Category Discovery | Code | 0
Knowledge Distillation for Multimodal Egocentric Action Recognition Robust to Missing Modalities | | 0
Towards Unconstrained 2D Pose Estimation of the Human Spine | | 0
ThermoStereoRT: Thermal Stereo Matching in Real Time via Knowledge Distillation and Attention-based Refinement | Code | 0
SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement | Code | 2
Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation | | 0
WK-Pnet: FM-Based Positioning via Wavelet Packet Decomposition and Knowledge Distillation | | 0
Teaching pathology foundation models to accurately predict gene expression with parameter efficient knowledge transfer | | 0
GOTHAM: Graph Class Incremental Learning Framework under Weak Supervision | Code | 0
Resource-Efficient Beam Prediction in mmWave Communications with Multimodal Realistic Simulation Framework | | 0
A Novel Algorithm for Personalized Federated Learning: Knowledge Distillation with Weighted Combination Loss | | 0
Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible | | 0
Distillation and Refinement of Reasoning in Small Language Models for Document Re-ranking | Code | 1
Beyond Conventional Transformers: The Medical X-ray Attention (MXA) Block for Improved Multi-Label Diagnosis Using Knowledge Distillation | Code | 0
Causal Self-supervised Pretrained Frontend with Predictive Code for Speech Separation | | 0
Marine Saliency Segmenter: Object-Focused Conditional Diffusion with Region-Level Semantic Knowledge Distillation | | 0
Agglomerating Large Vision Encoders via Distillation for VFSS Segmentation | | 0
UNDO: Understanding Distillation as Optimization | | 0
Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression | | 0
FlowDistill: Scalable Traffic Flow Prediction via Distillation from LLMs | Code | 0
Page 4 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | | Unverified