
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized, so a smaller model can often be trained to reproduce much of the larger model's behavior at a fraction of the inference cost.
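
In the classic formulation, the smaller "student" model is trained to match the temperature-softened output distribution of the larger "teacher" model in addition to the ground-truth labels. The sketch below illustrates this soft-target loss in the style of Hinton et al. (2015); it assumes a PyTorch setup, and the temperature and weighting values are illustrative rather than taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss (Hinton et al., 2015 style).

    Blends a KL term between temperature-softened teacher and student
    distributions with the usual cross-entropy on the ground-truth labels.
    T and alpha are illustrative hyperparameters, not values from any
    specific paper or benchmark listed on this page.
    """
    # Teacher probabilities and student log-probabilities at temperature T
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales the soft-target gradients so they stay comparable across temperatures
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Usage sketch: the teacher is frozen; only the student receives gradients.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```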

Papers

Showing 1051–1100 of 4240 papers

Title | Status | Hype
Why Not Transform Chat Large Language Models to Non-English? | Code | 0
HoverFast: an accurate, high-throughput, clinically deployable nuclear segmentation tool for brightfield digital pathology images | | 0
Low-Resolution Chest X-ray Classification via Knowledge Distillation and Multi-task Learning | | 0
Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch | | 0
AMFD: Distillation via Adaptive Multimodal Fusion for Multispectral Pedestrian Detection | Code | 1
Active Object Detection with Knowledge Aggregation and Distillation from Large Models | Code | 0
CLRKDNet: Speeding up Lane Detection with Knowledge Distillation | Code | 1
GeoMask3D: Geometrically Informed Mask Selection for Self-Supervised Point Cloud Learning in 3D | | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | | 0
Distill-then-prune: An Efficient Compression Framework for Real-time Stereo Matching Network on Edge Devices | | 0
Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models | | 0
Efficiency optimization of large-scale language models based on deep learning in natural language processing tasks | | 0
Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction | | 0
Federated Learning for Time-Series Healthcare Sensing with Incomplete Modalities | Code | 0
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors | Code | 1
Cross-Domain Knowledge Distillation for Low-Resolution Human Pose Estimation | | 0
Hierarchical Selective Classification | | 0
Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation | | 0
INDUS: Effective and Efficient Language Models for Scientific Applications | | 0
Densely Distilling Cumulative Knowledge for Continual Learning | | 0
Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation | Code | 0
QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models | | 0
GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation | Code | 0
Meta-Learned Modality-Weighted Knowledge Distillation for Robust Multi-Modal Learning with Missing Data | Code | 0
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting | | 0
Attend, Distill, Detect: Attention-aware Entropy Distillation for Anomaly Detection | Code | 0
For the Misgendered Chinese in Gender Bias Research: Multi-Task Learning with Knowledge Distillation for Pinyin Name-Gender Prediction | | 0
MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis | | 0
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks | | 0
Less-supervised learning with knowledge distillation for sperm morphology analysis | Code | 0
CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization | | 0
Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management | | 0
A Review on Discriminative Self-supervised Learning Methods in Computer Vision | | 0
ELiTe: Efficient Image-to-LiDAR Knowledge Transfer for Semantic Segmentation | | 0
GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation | | 0
Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data | | 0
Sub-goal Distillation: A Method to Improve Small Language Agents | Code | 0
Exploring Extreme Quantization in Spiking Language Models | | 0
Semantic Objective Functions: A distribution-aware method for adding logical constraints in deep learning | | 0
Advancing Pre-trained Teacher: Towards Robust Feature Discrepancy for Anomaly Detection | Code | 1
Efficient Compression of Multitask Multilingual Speech Models | | 0
Error Exponent in Agnostic PAC Learning | | 0
Wake Vision: A Tailored Dataset and Benchmark Suite for TinyML Computer Vision Applications | | 0
CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation | Code | 1
Distillation Matters: Empowering Sequential Recommenders to Match the Performance of Large Language Model | Code | 1
Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism | Code | 0
Knowledge Distillation vs. Pretraining from Scratch under a Fixed (Computation) Budget | | 0
Control Policy Correction Framework for Reinforcement Learning-based Energy Arbitrage Strategies | | 0
Revealing the Two Sides of Data Augmentation: An Asymmetric Distillation-based Win-Win Solution for Open-Set Recognition | | 0
Retrieval-Oriented Knowledge for Click-Through Rate Prediction | Code | 1
Page 22 of 85

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy % | 86.43 | | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy % | 85.53 | | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy % | 83.93 | | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 83.8 | | Unverified
5 | KD++ (T:RegNetY-16GF S:ViT-B) | Top-1 accuracy % | 83.6 | | Unverified
6 | VkD (T:RegNetY-160 S:DeiT-S) | Top-1 accuracy % | 82.9 | | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy % | 82.7 | | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy % | 82.55 | | Unverified
9 | DiffKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 82.5 | | Unverified
10 | DIST (T:Swin-L S:Swin-T) | Top-1 accuracy % | 82.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4 S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | | Unverified
2 | shufflenet-v2 (T:resnet-32x4 S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | | Unverified
3 | MV-MR (T:CLIP/ViT-B-16 S:resnet50) | Top-1 Accuracy (%) | 78.6 | | Unverified
4 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 Accuracy (%) | 78.28 | | Unverified
5 | resnet8x4 (T:resnet32x4 S:resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | | Unverified
6 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | | Unverified
7 | ReviewKD++ (T:resnet-32x4 S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | | Unverified
8 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 Accuracy (%) | 77.5 | | Unverified
9 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 Accuracy (%) | 76.68 | | Unverified
10 | resnet8x4 (T:resnet32x4 S:resnet8x4) | Top-1 Accuracy (%) | 76.31 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T:ResNet101 S:ResNet50) | mAP | 93.17 | | Unverified
2 | LSHFM (T:ResNet101 S:MobileNetV2) | mAP | 90.14 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T:Adabins S:MobileNetV2) | RMSE | 2.43 | | Unverified