
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
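
In the classic formulation (Hinton et al., 2015), a smaller "student" network is trained to match the temperature-softened output distribution of a larger "teacher" in addition to the ground-truth labels. The snippet below is a minimal sketch of that soft-label loss, assuming PyTorch; the function name and the hyperparameters T (temperature) and alpha (soft/hard weighting) are illustrative defaults, not taken from any paper listed here.

```python
# Minimal sketch of soft-label knowledge distillation (Hinton et al., 2015).
# Assumes PyTorch; T and alpha are illustrative hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable across temperatures
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```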

Papers

Showing 501–550 of 4240 papers

Title | Status | Hype
FedDW: Distilling Weights through Consistency Optimization in Heterogeneous Federated Learning | Code | 0
Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation | - | 0
Expanding Deep Learning-based Sensing Systems with Multi-Source Knowledge Transfer | - | 0
Multi-Branch Mutual-Distillation Transformer for EEG-Based Seizure Subtype Classification | - | 0
Distillation of Diffusion Features for Semantic Correspondence | - | 0
Enhancing CLIP Conceptual Embedding through Knowledge Distillation | - | 0
Align-KD: Distilling Cross-Modal Alignment Knowledge for Mobile Vision-Language Model | Code | 1
Mutli-View 3D Reconstruction using Knowledge Distillation | Code | 0
QABISAR: Query-Article Bipartite Interactions for Statutory Article Retrieval | - | 0
Local vs. Global: Local Land-Use and Land-Cover Models Deliver Higher Quality Maps | - | 0
Continuous Concepts Removal in Text-to-image Diffusion Models | - | 0
Toward Fair Graph Neural Networks Via Dual-Teacher Knowledge Distillation | - | 0
Reverse Thinking Makes LLMs Stronger Reasoners | - | 0
Headache to Overstock? Promoting Long-tail Items through Debiased Product Bundling | - | 0
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs | - | 0
Zero-shot Slot Filling in the Age of LLMs for Dialogue Systems | - | 0
Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG | - | 0
Active Data Curation Effectively Distills Large-Scale Multimodal Models | - | 0
Vision Mamba Distillation for Low-resolution Fine-grained Image Classification | Code | 1
Improved implicit diffusion model with knowledge distillation to estimate the spatial distribution density of carbon stock in remote sensing imagery | - | 0
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation | Code | 0
Words Matter: Leveraging Individual Text Embeddings for Code Generation in CLIP Test-Time Adaptation | Code | 0
Leveraging Foundation Models To learn the shape of semi-fluid deformable objects | - | 0
Dynamic Self-Distillation via Previous Mini-batches for Fine-tuning Small Language Models | - | 0
Ensemble Learning via Knowledge Transfer for CTR Prediction | Code | 0
Beyond Task Vectors: Selective Task Arithmetic Based on Importance Metrics | - | 0
O1 Replication Journey -- Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson? | Code | 7
When Babies Teach Babies: Can student knowledge sharing outperform Teacher-Guided Distillation on small datasets? | Code | 0
Learn from Foundation Model: Fruit Detection Model without Manual Annotation | Code | 1
TransFair: Transferring Fairness from Ocular Disease Classification to Progression Prediction | - | 0
Efficient Ternary Weight Embedding Model: Bridging Scalability and Performance | Code | 0
Partial Knowledge Distillation for Alleviating the Inherent Inter-Class Discrepancy in Federated Learning | - | 0
Faithful Label-free Knowledge Distillation | Code | 0
BanglaEmbed: Efficient Sentence Embedding Models for a Low-Resource Language Using Cross-Lingual Distillation Techniques | - | 0
Adversarial Prompt Distillation for Vision-Language Models | - | 0
RankByGene: Gene-Guided Histopathology Representation Learning Through Cross-Modal Ranking Consistency | - | 0
Simplifying CLIP: Unleashing the Power of Large-Scale Models on Consumer-level Computers | - | 0
Adaptive Group Robust Ensemble Knowledge Distillation | - | 0
Improving Mathematical Reasoning Capabilities of Small Language Models via Feedback-Driven Distillation | - | 0
Information Extraction from Heterogeneous Documents without Ground Truth Labels using Synthetic Label Generation and Knowledge Distillation | - | 0
BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models | Code | 2
WARLearn: Weather-Adaptive Representation Learning | Code | 0
Teaching MLPs to Master Heterogeneous Graph-Structured Knowledge for Efficient and Accurate Inference | Code | 0
CLFace: A Scalable and Resource-Efficient Continual Learning Framework for Lifelong Face Recognition | - | 0
Explainable LLM-driven Multi-dimensional Distillation for E-Commerce Relevance Learning | - | 0
RTSR: A Real-Time Super-Resolution Model for AV1 Compressed Content | - | 0
What Makes a Good Dataset for Knowledge Distillation? | - | 0
Just KIDDIN: Knowledge Infusion and Distillation for Detection of INdecent Memes | - | 0
Reward Modeling with Ordinal Feedback: Wisdom of the Crowd | - | 0
KDC-MAE: Knowledge Distilled Contrastive Mask Auto-Encoder | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T:CLIP/ViT-B-16 S:resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T:resnet32x4 S:resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T:ResNet101 S:ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T:ResNet101 S:MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T:Adabins S:MobileNetV2) | RMSE | 2.43 | - | Unverified